diff --git a/MANUAL.html b/MANUAL.html
index 229718084..71b265008 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -12,7 +12,7 @@

Rclone

Logo

@@ -283,15 +283,26 @@ two-3.txt: renamed from: two.txt
rclone dedupe rename "drive:Google Photos"
rclone dedupe [mode] remote:path

Options

-
      --dedupe-mode value   Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
+
      --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|rename.

rclone authorize

Remote authorization.

Synopsis

Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.

rclone authorize
+

rclone cat

+

Concatenates any files and sends them to stdout.

+

Synopsis

+

rclone cat sends any files to standard output.

+

You can use it like this to output a single file.

+
rclone cat remote:path/to/file
+

Or like this to output any file in dir or subdirectories.

+
rclone cat remote:path/to/dir
+

Or like this to output any .txt files in dir or subdirectories.

+
rclone --include "*.txt" cat remote:path/to/dir
+
rclone cat remote:path

rclone genautocomplete

Output bash completion script for rclone.

-

Synopsis

+

Synopsis

Generates a bash shell autocompletion script for rclone.

This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg

sudo rclone genautocomplete
@@ -301,9 +312,46 @@ two-3.txt: renamed from: two.txt
rclone genautocomplete [output_file]

rclone gendocs

Output markdown docs for rclone to the directory supplied.

-

Synopsis

+

Synopsis

This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.

rclone gendocs output_directory
+

rclone mount

+

Mount the remote as a mountpoint. EXPERIMENTAL

+

Synopsis

+

rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's cloud storage systems as a file system with FUSE.

+

This is EXPERIMENTAL - use with care.

+

First set up your remote using rclone config. Check it works with rclone ls etc.

+

Start the mount like this

+
rclone mount remote:path/to/files /path/to/local/mount &
+

Stop the mount with

+
fusermount -u /path/to/local/mount
+

Or with OS X

+
umount /path/to/local/mount
+

Limitations

+

This can only read files sequentially, or write files sequentially. It can't read and write or seek in files.

+

rclonefs inherits rclone's directory handling. In rclone's world directories don't really exist. This means that empty directories will have a tendency to disappear once they fall out of the directory cache.

+

The bucket based FSes (eg swift, s3, google cloud storage, b2) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift: won't work, whereas swift:bucket and swift:bucket/path will.

+

Only supported on Linux, FreeBSD and OS X at the moment.

+

rclone mount vs rclone sync/copy

+

File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. This might happen in the future, but for the moment rclone mount won't do that, so will be less reliable than the rclone command.

+

Bugs

+
  * All the remotes should work for read, but some may not for write
    * those which need to know the size in advance won't - eg B2
+

TODO

+
  * Check hashes on upload/download
  * Preserve timestamps
  * Move directories
+
rclone mount remote:path /path/to/mountpoint
+

Options

+
      --debug-fuse   Debug the FUSE internals - needs -v.
+      --no-modtime   Don't read the modification time (can speed things up).

Copying single files

rclone normally syncs or copies directories. However if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory if it isn't.

For example, suppose you have a remote with a file in it called test.jpg, then you could copy just that file like this

@@ -340,7 +388,7 @@ two-3.txt: renamed from: two.txt

This can be used when scripting to make aged backups efficiently, eg

rclone sync remote:current-backup remote:previous-backup
 rclone sync /path/to/files remote:current-backup
-

Options

+

Options

Rclone has a number of options to control its behaviour.

Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

Options which use SIZE use kByte by default. However a suffix of b for bytes, k for kBytes, M for MBytes and G for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
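For example, combining a TIME flag and a SIZE-per-second flag (the paths here are illustrative):

rclone sync --contimeout 1m30s --bwlimit 10M /path/to/files remote:backup

This sets the connection timeout to 1 minute 30 seconds and limits bandwidth to 10 MBytes/s.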

@@ -1073,8 +1121,21 @@ y/e/d> y
-

Limitations

+

Limitations

Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MBytes/s but lots of small files can take a long time.

+

Making your own client_id

+

When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.

+

However you might find you get better performance making your own client_id if you are a heavy user. Or you may not depending on exactly how Google have been raising rclone's rate limit.

+

Here is how to create your own Google Drive client ID for rclone:

+
+
  1. Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access)
  2. Select a project or create a new project.
  3. Under Overview, Google APIs, Google Apps APIs, click "Drive API", then "Enable".
  4. Click "Credentials" in the left-side panel (not "Go to credentials", which opens the wizard), then "Create credentials", then "OAuth client ID". It will prompt you to set the OAuth consent screen product name, if you haven't set one already.
  5. Choose an application type of "other", and click "Create". (the default name is fine)
  6. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote, as sketched below.
+
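For example, the resulting section of your rclone config file might end up looking something like this (all values here are illustrative placeholders, not real credentials):

[remote]
type = drive
client_id = 123456789012-example.apps.googleusercontent.com
client_secret = EXAMPLE_CLIENT_SECRET
token = {"access_token":"...","expiry":"..."}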

(Thanks to @balazer on github for these instructions.)

Amazon S3

Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

Here is an example of making an s3 configuration. First run

@@ -1184,6 +1245,25 @@ Choose a number from below, or type in your own value
 9 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint> 1
+Canned ACL used when creating buckets and/or storing objects in S3.
+For more info visit http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Choose a number from below, or type in your own value
+ 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
+   \ "private"
+ 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
+   \ "public-read"
+   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
+ 3 | Granting this on a bucket is generally not recommended.
+   \ "public-read-write"
+ 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
+   \ "authenticated-read"
+   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
+ 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+   \ "bucket-owner-read"
+   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
+ 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+   \ "bucket-owner-full-control"
+acl> private
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
@@ -1427,7 +1507,7 @@ y/e/d> y

Modified time

The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
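For example, an object's metadata might include a header like this (the timestamp is illustrative):

X-Object-Meta-Mtime: 1468938283.204500783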

-

Limitations

+

Limitations

The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

Troubleshooting

Rclone gives Failed to create file system for "remote:": Bad Request

@@ -1510,7 +1590,7 @@ y/e/d> y

Here are the command line options specific to this cloud storage system.

--dropbox-chunk-size=SIZE

Upload chunk size. Max 150M. The default is 128MB. Note that this isn't buffered into memory.
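For example, to use 64MB chunks (paths are illustrative):

rclone copy --dropbox-chunk-size 64M /path/to/files dropbox:backup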

-

Limitations

+

Limitations

Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail.

If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.
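Spelled out, that work-around is:

rclone delete dropbox:dir
rclone rmdir dropbox:dir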

@@ -1703,7 +1783,9 @@ y/e/d> y

Files this size or more will be downloaded via their tempLink. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.

To download files above this threshold, rclone requests a tempLink which downloads the file through a temporary URL directly from the underlying S3 storage.

-

Limitations

+

--acd-upload-wait-time=TIME

+

Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This controls the time rclone waits - 2 minutes by default. You might want to increase the time if you are having problems with very big files. Upload with the -v flag for more info.
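For example, to wait up to 5 minutes with verbose output (paths are illustrative):

rclone copy -v --acd-upload-wait-time 5m /path/to/bigfiles acd:bigfiles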

+

Limitations

Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries flag) which should hopefully work around this problem.

Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.

@@ -1791,7 +1873,7 @@ y/e/d> y

Above this size files will be chunked - must be multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.

--onedrive-upload-cutoff=SIZE

Cutoff for switching to chunked upload - must be <= 100MB. The default is 10MB.
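For example, to switch to chunked upload above 50MB using 20MB chunks (a multiple of 320k; paths are illustrative):

rclone copy --onedrive-upload-cutoff 50M --onedrive-chunk-size 20M /path/to/files onedrive:files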

-

Limitations

+

Limitations

Note that One Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Rclone only supports your default One Drive, and doesn't work with One Drive for business. Both these issues may be fixed at some point depending on user demand!

There are quite a few characters that can't be in One Drive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ (a fullwidth question mark) instead.

@@ -1871,7 +1953,7 @@ y/e/d> y

The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

Note that Hubic wraps the Swift backend, so most of its properties are the same.

-

Limitations

+

Limitations

This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.

The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

Backblaze B2

@@ -2079,6 +2161,186 @@ y/e/d> y

Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format.
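For example, the stored value might look like this (the timestamp is illustrative):

rclone_modified: 2016-07-15T12:34:56.123456789Z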

MD5 checksums

MD5 checksums are natively supported by Yandex Disk.

+

Crypt

+

The crypt remote encrypts and decrypts another remote.

+

To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example.

+

First check your chosen remote is working - we'll call it remote:path in these docs. Note that anything inside remote:path will be encrypted and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote, eg s3:bucket. If you just use s3: then rclone will make encrypted bucket names too (if using file name encryption) which may or may not be what you want.

+

Now configure crypt using rclone config. We will call this one secret to differentiate it from the remote.

+
No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n   
+name> secret
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+   \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+   \ "s3"
+ 3 / Backblaze B2
+   \ "b2"
+ 4 / Dropbox
+   \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+   \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
+   \ "google cloud storage"
+ 7 / Google Drive
+   \ "drive"
+ 8 / Hubic
+   \ "hubic"
+ 9 / Local Disk
+   \ "local"
+10 / Microsoft OneDrive
+   \ "onedrive"
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+   \ "swift"
+12 / Yandex Disk
+   \ "yandex"
+Storage> 5
+Remote to encrypt/decrypt.
+remote> remote:path
+How to encrypt the filenames.
+Choose a number from below, or type in your own value
+ 1 / Don't encrypt the file names.  Adds a ".bin" extension only.
+   \ "off"
+ 2 / Encrypt the filenames see the docs for the details.
+   \ "standard"
+filename_encryption> 2
+Password or pass phrase for encryption.
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Password or pass phrase for salt. Optional but recommended.
+Should be different to the previous password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> g
+Password strength in bits.
+64 is just about memorable
+128 is secure
+1024 is the maximum
+Bits> 128
+Your password is: JAsJvRcgR-_veXNfy_sGmQ
+Use this password?
+y) Yes
+n) No
+y/n> y
+Remote config
+--------------------
+[secret]
+remote = remote:path
+filename_encryption = standard
+password = CfDxopZIXFG0Oo-ac7dPLWWOHkNJbw
+password2 = HYUpfuzHJL8qnX9fOaIYijq0xnVLwyVzp3y4SF3TwYqAU6HLysk
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+

Important The password stored in the config file is lightly obscured so it isn't immediately obvious what it is. It is in no way secure unless you use config file encryption.

+

A long passphrase is recommended, or you can use a random one. Note that if you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible - all the secrets used are derived from those two passwords/passphrases.

+

Note that rclone does not encrypt

  * file length - this can be calculated within 16 bytes
  * modification time - used for syncing

+

Example

+

To test I made a little directory of files using "standard" file name encryption.

+
plaintext/
+├── file0.txt
+├── file1.txt
+└── subdir
+    ├── file2.txt
+    ├── file3.txt
+    └── subsubdir
+        └── file4.txt
+

Copy these to the remote and list them back

+
$ rclone -q copy plaintext secret:
+$ rclone -q ls secret:
+        7 file1.txt
+        6 file0.txt
+        8 subdir/file2.txt
+       10 subdir/subsubdir/file4.txt
+        9 subdir/file3.txt
+

Now see what that looked like when encrypted

+
$ rclone -q ls remote:path
+       55 hagjclgavj2mbiqm6u6cnjjqcg
+       54 v05749mltvv1tf4onltun46gls
+       57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
+       58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
+       56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
+

Note that this retains the directory structure which means you can do this

+
$ rclone -q ls secret:subdir
+        8 file2.txt
+        9 file3.txt
+       10 subsubdir/file4.txt
+

If you don't use file name encryption then the remote will look like this - note the .bin extensions added to prevent the cloud provider attempting to interpret the data.

+
$ rclone -q ls remote:path
+       54 file0.txt.bin
+       57 subdir/file3.txt.bin
+       56 subdir/file2.txt.bin
+       58 subdir/subsubdir/file4.txt.bin
+       55 file1.txt.bin
+

File name encryption modes

+

Here are some of the features of the file name encryption modes

+

Off

  * doesn't hide file names or directory structure
  * allows for longer file names (~246 characters)
  * can use sub paths and copy single files

+

Standard

  * file names encrypted
  * file names can't be as long (~156 characters)
  * can use sub paths and copy single files
  * directory structure visible
  * identical file names will have identical uploaded names
  * can use shortcuts to shorten the directory recursion

+

Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.

+

There may be an even more secure file name encryption mode in the future which will address the long file name problem.

+

File formats

+

File encryption

+

Files are encrypted 1:1 source file to destination object. The file has a header and is divided into chunks.

+
Header

  * 8 bytes magic string RCLONE\x00\x00
  * 24 bytes Nonce (IV)
+

The initial nonce is generated from the operating system's cryptographically strong random number generator. The nonce is incremented for each chunk read making sure each nonce is unique for each block written. The chance of a nonce being re-used is minuscule. If you wrote an exabyte of data (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of re-using a nonce. (As a rough check: an exabyte is about 10¹⁸/65536 ≈ 1.5×10¹³ chunks, and a birthday-style bound over the 192 bit nonce space gives roughly (1.5×10¹³)²/2¹⁹³ ≈ 2×10⁻³².)

+

Chunk

+

Each chunk will contain 64kB of data, except for the last one which may have less data. The data chunk is in standard NACL secretbox format. Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate messages.

+

Each chunk contains:

+
  * 16 Bytes of Poly1305 authenticator
  * 1 - 65536 bytes XSalsa20 encrypted data
+

64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can't be too big.

+

This uses a 32 byte (256 bit) key derived from the user password.
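As a minimal sketch of what sealing one chunk looks like - this illustrates NACL secretbox in Go (rclone's implementation language) under the parameters described above, and is not rclone's actual code:

package main

import (
	"fmt"

	"golang.org/x/crypto/nacl/secretbox"
)

func main() {
	var key [32]byte   // 32 byte key derived from the user's password
	var nonce [24]byte // random initial nonce, incremented for each chunk

	plaintext := make([]byte, 65536) // one full 64k chunk of data

	// Seal prepends the 16 byte Poly1305 authenticator to the
	// XSalsa20 encrypted data.
	chunk := secretbox.Seal(nil, plaintext, &nonce, &key)
	fmt.Println(len(chunk)) // plaintext length plus the 16 byte authenticator
}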

+

Examples

+

1 byte file will encrypt to

+
  * 32 bytes header
  * 17 bytes data with hash
+

49 bytes total

+

1MB (1048576 bytes) file will encrypt to

+
  * 32 bytes header
  * 16 chunks of 65568 bytes
+

1049120 bytes total (a 0.05% overhead). This is the overhead for big files.

+

Name encryption

+

File names are encrypted segment by segment - the path is broken up into / separated strings and these are encrypted individually.

+

File name segments are padded using PKCS#7 to a multiple of 16 bytes before encryption.
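A minimal sketch of that padding step in Go (the function name is illustrative):

package main

import "fmt"

// pkcs7Pad pads b to a multiple of blockSize by appending n copies of
// the byte value n, where n is the number of bytes added (1..blockSize).
func pkcs7Pad(b []byte, blockSize int) []byte {
	n := blockSize - len(b)%blockSize
	out := make([]byte, len(b), len(b)+n)
	copy(out, b)
	for i := 0; i < n; i++ {
		out = append(out, byte(n))
	}
	return out
}

func main() {
	fmt.Println(len(pkcs7Pad([]byte("file1"), 16))) // 16
}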

+

They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.

+

This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system.

+

This means that

+
  * filenames with the same name will encrypt the same
  * filenames which start the same won't have a common prefix
+

This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user password.

+

After encryption they are written out using a modified version of standard base32 encoding as described in RFC4648. The standard encoding is modified in two ways:

+
  * it becomes lower case (no-one likes upper case filenames!)
  * we strip the padding character =
+

base32 is used rather than the more efficient base64 so rclone can be used on case insensitive remotes (eg Windows, Amazon Drive).
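A sketch of those two modifications using Go's standard base32 encoder (illustrative, not rclone's exact code):

package main

import (
	"encoding/base32"
	"fmt"
	"strings"
)

func main() {
	encryptedName := []byte("encrypted name bytes")

	// Standard RFC4648 base32 encoding, then the two modifications:
	// lower case the result and strip the "=" padding.
	s := base32.StdEncoding.EncodeToString(encryptedName)
	s = strings.ToLower(strings.TrimRight(s, "="))
	fmt.Println(s)
}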

+

Key derivation

+

Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
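That derivation looks roughly like this in Go (the password and salt values are illustrative):

package main

import (
	"fmt"

	"golang.org/x/crypto/scrypt"
)

func main() {
	password := []byte("your pass phrase")
	salt := []byte("password2") // rclone substitutes an internal salt if none is supplied

	// N=16384, r=8, p=1 as stated above, producing 80 bytes of key material.
	key, err := scrypt.Key(password, salt, 16384, 8, 1, 80)
	if err != nil {
		panic(err)
	}

	fileKey := key[:32]   // 32 byte key for file data encryption
	nameKey := key[32:64] // 32 byte key for file name encryption
	nameIV := key[64:]    // 16 byte IV for file name encryption
	fmt.Println(len(fileKey), len(nameKey), len(nameIV))
}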

+

scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.

Local Filesystem

Local paths are specified as normal filesystem paths, eg /path/to/wherever, so

rclone sync /home/source /tmp/destination
@@ -2106,6 +2368,42 @@ nounc = true

This will use UNC paths on c:\src but not on z:\dst. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.

Changelog