diff --git a/MANUAL.html b/MANUAL.html index 19d38aa09..826801f3c 100644 --- a/MANUAL.html +++ b/MANUAL.html @@ -12,7 +12,7 @@

Rclone

Logo

@@ -33,6 +33,7 @@
  • Hubic
  • IBM COS S3
  • Memset Memstore
  • +
  • Mega
  • Microsoft Azure Blob Storage
  • Microsoft OneDrive
  • Minio
  • @@ -40,7 +41,7 @@
  • OVH
  • Openstack Swift
  • Oracle Cloud Storage
  • -
  • Ownloud
  • +
  • ownCloud
  • pCloud
  • put.io
  • QingStor
  • @@ -151,6 +152,7 @@ sudo mv rclone /usr/local/bin/
  • Google Drive
  • HTTP
  • Hubic
  • +
  • Mega
  • Microsoft Azure Blob Storage
  • Microsoft OneDrive
  • Openstack Swift / Rackspace Cloudfiles / Memset Memstore
  • @@ -272,6 +274,12 @@ rclone --dry-run --min-size 100M delete remote:path

    List the objects in the path with size and path.

    Synopsis

    Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default.

    +

    Eg

    +
    $ rclone ls swift:bucket
    +    60295 bevajer5jef
    +    90613 canole
    +    94467 diwogej7
    +    37600 fubuwic

Any of the filtering options can be applied to this command.

    There are several related list commands

    ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes).

    rclone ls remote:path [flags]

    Options

      -h, --help   help for ls

    rclone lsd

    List all directories/containers/buckets in the path.

    Synopsis

    -

    Lists the directories in the source path to standard output. Recurses by default.

    +

    Lists the directories in the source path to standard output. Does not recurse by default. Use the -R flag to recurse.

    +

    This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name of the directory, Eg

    +
    $ rclone lsd swift:
    +      494000 2018-04-26 08:43:20     10000 10000files
    +          65 2018-04-26 08:43:20         1 1File
    +

    Or

    +
    $ rclone lsd drive:test
    +          -1 2016-10-17 17:41:53        -1 1000files
    +          -1 2017-01-03 14:40:54        -1 2500files
    +          -1 2017-07-08 14:39:28        -1 4000files
    +

    If you just want the directory names use "rclone lsf --dirs-only".

Any of the filtering options can be applied to this command.

    There are several related list commands

    ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes).

    rclone lsd remote:path [flags]

    Options

    -
      -h, --help   help for lsd
    +
      -h, --help        help for lsd
    +  -R, --recursive   Recurse into the listing.

    rclone lsl

    List the objects in path with modification time, size and path.

    Synopsis

    Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default.

    +

    Eg

    +
    $ rclone lsl swift:bucket
    +    60295 2016-06-25 18:55:41.062626927 bevajer5jef
    +    90613 2016-06-25 18:55:43.302607074 canole
    +    94467 2016-06-25 18:55:43.046609333 diwogej7
    +    37600 2016-06-25 18:55:40.814629136 fubuwic

Any of the filtering options can be applied to this command.

    There are several related list commands

    ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes).

    rclone lsl remote:path [flags]

    Options

      -h, --help   help for lsl
    @@ -345,7 +373,8 @@ rclone --dry-run --min-size 100M delete remote:path

    Prints the total size and number of objects in remote:path.

    rclone size remote:path [flags]

    Options

    -
      -h, --help   help for size
    +
      -h, --help   help for size
    +      --json   format output as JSON
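
For scripting, the --json flag prints a single JSON object with the object count and total byte size, eg (the path and numbers here are illustrative)

    $ rclone size --json remote:path
    {"count":4,"bytes":282917}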

    rclone version

    Show the version number.

    Synopsis

    @@ -415,6 +444,7 @@ two-3.txt: renamed from: two.txt
  • --dedupe-mode first - removes identical files then keeps the first one.
  • --dedupe-mode newest - removes identical files then keeps the newest one.
  • --dedupe-mode oldest - removes identical files then keeps the oldest one.
  • +
  • --dedupe-mode largest - removes identical files then keeps the largest one.
  • --dedupe-mode rename - removes identical files then renames the rest to be different.
  • For example to rename all the identically named photos in your Google Photos directory, do

    @@ -425,23 +455,62 @@ two-3.txt: renamed from: two.txt

    Options

          --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
       -h, --help                 help for dedupe
    +

    rclone about

    +

    Get quota information from the remote.

    +

    Synopsis

    +

    Get quota information from the remote, like bytes used/free/quota and bytes used in the trash. Not supported by all remotes.

    +

    This will print to stdout something like this:

    +
    Total:   17G
    +Used:    7.444G
    +Free:    1.315G
    +Trashed: 100.000M
    +Other:   8.241G
    +

    Where the fields are:

+
  • Total: total size available.
  • Used: total size used.
  • Free: total amount this user could upload.
  • Trashed: total amount in the trash.
  • Other: total amount in other storage (eg Gmail, Google Photos).
+

    Note that not all the backends provide all the fields - they will be missing if they are not known for that backend. Where it is known that the value is unlimited the value will also be omitted.

    +

    Use the --full flag to see the numbers written out in full, eg

    +
    Total:   18253611008
    +Used:    7993453766
    +Free:    1411001220
    +Trashed: 104857602
    +Other:   8849156022
    +

    Use the --json flag for a computer readable output, eg

    +
    {
    +    "total": 18253611008,
    +    "used": 7993453766,
    +    "trashed": 104857602,
    +    "other": 8849156022,
    +    "free": 1411001220
    +}
    +
    rclone about remote: [flags]
    +

    Options

    +
          --full   Full numbers instead of SI units
    +  -h, --help   help for about
    +      --json   Format output as JSON

    rclone authorize

    Remote authorization.

    -

    Synopsis

    +

    Synopsis

    Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.

    rclone authorize [flags]
    -

    Options

    +

    Options

      -h, --help   help for authorize

    rclone cachestats

    Print cache stats for a remote

    -

    Synopsis

    +

    Synopsis

    Print cache stats for a remote in JSON format

    rclone cachestats source: [flags]
    -

    Options

    +

    Options

      -h, --help   help for cachestats

    rclone cat

    Concatenates any files and sends them to stdout.

    -

    Synopsis

    +

    Synopsis

    rclone cat sends any files to standard output.

    You can use it like this to output a single file

    rclone cat remote:path/to/file
    @@ -451,7 +520,7 @@ two-3.txt: renamed from: two.txt
    rclone --include "*.txt" cat remote:path/to/dir

    Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.
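
As a worked example of that offset arithmetic (the file name is hypothetical), these two commands both print the last 5 characters of the file

    rclone cat --offset -5 --count 5 remote:path/to/file
    rclone cat --tail 5 remote:path/to/file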

    rclone cat remote:path [flags]
    -

    Options

    +

    Options

          --count int    Only print N characters. (default -1)
           --discard      Discard the output instead of printing.
           --head int     Only print the first N characters.
    @@ -460,76 +529,76 @@ two-3.txt: renamed from: two.txt
    --tail int Only print the last N characters.

    rclone config create

    Create a new remote with name, type and options.

    -

    Synopsis

    +

    Synopsis

Create a new remote of <name> with <type> and options. The options should be passed in in pairs of <key> <value>.

For example, to make a swift remote named myremote using auto config you would do:

    rclone config create myremote swift env_auth true
    rclone config create <name> <type> [<key> <value>]* [flags]
    -

    Options

    +

    Options

      -h, --help   help for create

    rclone config delete

Delete an existing remote <name>.

    -

    Synopsis

    +

    Synopsis

Delete an existing remote <name>.

    rclone config delete <name> [flags]
    -

    Options

    +

    Options

      -h, --help   help for delete

    rclone config dump

    Dump the config file as JSON.

    -

    Synopsis

    +

    Synopsis

    Dump the config file as JSON.

    rclone config dump [flags]
    -

    Options

    +

    Options

      -h, --help   help for dump

    rclone config edit

    Enter an interactive configuration session.

    -

    Synopsis

    +

    Synopsis

Enter an interactive configuration session where you can set up new remotes and manage existing ones. You may also set or remove a password to protect your configuration.

    rclone config edit [flags]
    -

    Options

    +

    Options

      -h, --help   help for edit

    rclone config file

    Show path of configuration file in use.

    -

    Synopsis

    +

    Synopsis

    Show path of configuration file in use.

    rclone config file [flags]
    -

    Options

    +

    Options

      -h, --help   help for file

    rclone config password

    Update password in an existing remote.

    -

    Synopsis

    +

    Synopsis

Update an existing remote's password. The password should be passed in in pairs of <key> <value>.

For example, to set the password of a remote named myremote you would do:

    rclone config password myremote fieldname mypassword
    rclone config password <name> [<key> <value>]+ [flags]
    -

    Options

    +

    Options

      -h, --help   help for password

    rclone config providers

    List in JSON format all the providers and options.

    -

    Synopsis

    +

    Synopsis

    List in JSON format all the providers and options.

    rclone config providers [flags]
    -

    Options

    +

    Options

      -h, --help   help for providers

    rclone config show

    Print (decrypted) config file, or the config for a single remote.

    -

    Synopsis

    +

    Synopsis

    Print (decrypted) config file, or the config for a single remote.

    rclone config show [<remote>] [flags]
    -

    Options

    +

    Options

      -h, --help   help for show

    rclone config update

    Update options in an existing remote.

    -

    Synopsis

    +

    Synopsis

Update an existing remote's options. The options should be passed in in pairs of <key> <value>.

For example, to update the env_auth field of a remote named myremote you would do:

    rclone config update myremote swift env_auth true
    rclone config update <name> [<key> <value>]+ [flags]
    -

    Options

    +

    Options

      -h, --help   help for update

    rclone copyto

    Copy files from source to dest, skipping already copied

    -

    Synopsis

    +

    Synopsis

    If source:path is a file or directory then it copies it to a file or directory named dest:path.

This can be used to upload single files under a name other than their current one. If the source is a directory then it acts exactly like the copy command.

    So

@@ -543,11 +612,11 @@ if src is directory
see copy command for full details

    This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.

    rclone copyto source:path dest:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for copyto

    rclone cryptcheck

    Cryptcheck checks the integrity of a crypted remote.

    -

    Synopsis

    +

    Synopsis

    rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.

    For it to work the underlying remote of the cryptedremote must support some kind of checksum.

    It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.

    @@ -557,11 +626,11 @@ if src is directory
    rclone cryptcheck remote:path encryptedremote:path

    After it has run it will log the status of the encryptedremote:.

    rclone cryptcheck remote:path cryptedremote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for cryptcheck

    rclone cryptdecode

    Cryptdecode returns unencrypted file names.

    -

    Synopsis

    +

    Synopsis

    rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.

    If you supply the --reverse flag, it will return encrypted file names.

    use it like this

@@ -569,25 +638,25 @@ if src is directory
rclone cryptdecode --reverse encryptedremote: filename1 filename2
    rclone cryptdecode encryptedremote: encryptedfilename [flags]
    -

    Options

    +

    Options

      -h, --help      help for cryptdecode
           --reverse   Reverse cryptdecode, encrypts filenames

    rclone dbhashsum

    Produces a Dropbox hash file for all the objects in the path.

    -

    Synopsis

    +

    Synopsis

    Produces a Dropbox hash file for all the objects in the path. The hashes are calculated according to Dropbox content hash rules. The output is in the same format as md5sum and sha1sum.

    rclone dbhashsum remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for dbhashsum

    rclone genautocomplete

    Output completion script for a given shell.

    -

    Synopsis

    +

    Synopsis

    Generates a shell completion script for rclone. Run with --help to list the supported shells.

    -

    Options

    +

    Options

      -h, --help   help for genautocomplete

    rclone genautocomplete bash

    Output bash completion script for rclone.

    -

    Synopsis

    +

    Synopsis

    Generates a bash shell autocompletion script for rclone.

    This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg

    sudo rclone genautocomplete bash
    @@ -595,11 +664,11 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
    . /etc/bash_completion

    If you supply a command line argument the script will be written there.

    rclone genautocomplete bash [output_file] [flags]
    -

    Options

    +

    Options

      -h, --help   help for bash

    rclone genautocomplete zsh

    Output zsh completion script for rclone.

    -

    Synopsis

    +

    Synopsis

    Generates a zsh autocompletion script for rclone.

    This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, eg

    sudo rclone genautocomplete zsh
    @@ -607,39 +676,93 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
    autoload -U compinit && compinit

    If you supply a command line argument the script will be written there.

    rclone genautocomplete zsh [output_file] [flags]
    -

    Options

    +

    Options

      -h, --help   help for zsh

    rclone gendocs

    Output markdown docs for rclone to the directory supplied.

    -

    Synopsis

    +

    Synopsis

    This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.

    rclone gendocs output_directory [flags]
    -

    Options

    +

    Options

      -h, --help   help for gendocs
    +

    rclone hashsum

    +

Produces a hashsum file for all the objects in the path.

    +

    Synopsis

    +

    Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.

    +

    Run without a hash to see the list of supported hashes, eg

    +
    $ rclone hashsum
    +Supported hashes are:
    +  * MD5
    +  * SHA-1
    +  * DropboxHash
    +  * QuickXorHash
    +

    Then

    +
    $ rclone hashsum MD5 remote:path
    +
    rclone hashsum <hash> remote:path [flags]
    +

    Options

    +
      -h, --help   help for hashsum
+

rclone link

+

    Generate public link to file/folder.

    +

    Synopsis

    +

    rclone link will create or retrieve a public link to the given file or folder.

    +
    rclone link remote:path/to/file
    +rclone link remote:path/to/folder/
    +

    If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always be created with the least constraints – e.g. no expiry, no password protection, accessible without account.
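
Eg (the URL shown is purely illustrative - the exact form of the link depends on the remote)

    $ rclone link remote:path/to/file
    https://example.com/s/AbCdEfGhIjKlMn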

    +
    rclone link remote:path [flags]
    +

    Options

    +
      -h, --help   help for link

    rclone listremotes

    List all the remotes in the config file.

    -

    Synopsis

    +

    Synopsis

    rclone listremotes lists all the available remotes from the config file.

When used with the -l flag it lists the types too.
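
Eg (the remote names and types here are illustrative)

    $ rclone listremotes -l
    gdrive: drive
    s3:     s3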

    rclone listremotes [flags]
    -

    Options

    +

    Options

      -h, --help   help for listremotes
       -l, --long   Show the type as well as names.

    rclone lsf

    List directories and objects in remote:path formatted for parsing

    -

    Synopsis

    +

    Synopsis

    List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.

    +

    Eg

    +
    $ rclone lsf swift:bucket
    +bevajer5jef
    +canole
    +diwogej7
    +ferejej3gux/
    +fubuwic

    Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:

    p - path
     s - size
     t - modification time
     h - hash

    So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last.

    +

    Eg

    +
    $ rclone lsf  --format "tsp" swift:bucket
    +2016-06-25 18:55:41;60295;bevajer5jef
    +2016-06-25 18:55:43;90613;canole
    +2016-06-25 18:55:43;94467;diwogej7
    +2018-04-26 08:50:45;0;ferejej3gux/
    +2016-06-25 18:55:40;37600;fubuwic

    If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.

    For example to emulate the md5sum command you can use

    rclone lsf -R --hash MD5 --format hp --separator "  " --files-only .
    +

    Eg

    +
    $ rclone lsf -R --hash MD5 --format hp --separator "  " --files-only swift:bucket 
    +7908e352297f0f530b84a756f188baa3  bevajer5jef
    +cd65ac234e6fea5925974a51cdd865cc  canole
    +03b5341b4f234b9d984d03ad076bae91  diwogej7
    +8fd37c3810dd660778137ac3a66cc06d  fubuwic
    +99713e14a4c4ff553acaf1930fad985b  gixacuh7ku

    (Though "rclone md5sum ." is an easier way of typing this.)

By default the separator is ";". This can be changed with the --separator flag. Note that separators aren't escaped in the path, so putting the path last is a good strategy.

    +

    Eg

    +
    $ rclone lsf  --separator "," --format "tshp" swift:bucket
    +2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
    +2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
    +2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
    +2018-04-26 08:52:53,0,,ferejej3gux/
    +2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic

Any of the filtering options can be applied to this command.

    There are several related list commands

    ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes).

    rclone lsf remote:path [flags]
    -

    Options

    +

    Options

      -d, --dir-slash          Append a slash to directory names. (default true)
           --dirs-only          Only list directories.
           --files-only         Only list files.
    @@ -664,7 +788,7 @@ h - hash
    -s, --separator string Separator for the items in the format. (default ";")

    rclone lsjson

    List directories and objects in the path in JSON format.

    -

    Synopsis

    +

    Synopsis

    List directories and objects in the path in JSON format.

    The output is an array of Items, where each Item looks like this

{
    "Hashes" : {
        "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
        "MD5" : "b1946ac92492d2347c6235b4d2611184",
        "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
    },
    "IsDir" : false,
    "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
    "Name" : "file.txt",
    "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
    "Path" : "full/path/goes/here/file.txt",
    "Size" : 6
}

    @@ -684,10 +808,11 @@ h - hash
  • lsjson to list objects and directories in JSON format
  • ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes).

    rclone lsjson remote:path [flags]
    -

    Options

    +

    Options

      -M, --encrypted    Show the encrypted names.
           --hash         Include hashes in the output (may take longer).
       -h, --help         help for lsjson
    @@ -695,7 +820,7 @@ h - hash
    -R, --recursive Recurse into the listing.

    rclone mount

    Mount the remote as a mountpoint. EXPERIMENTAL

    -

    Synopsis

    +

    Synopsis

    rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

    This is EXPERIMENTAL - use with care.

    First set up your remote using rclone config. Check it works with rclone ls etc.

    @@ -723,8 +848,11 @@ umount /path/to/local/mount

File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the EXPERIMENTAL file caching for solutions to make mount more reliable.

    Attribute caching

    You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries.

    -

    The default is 0s - no caching - which is recommended for filesystems which can change outside the control of the kernel.

    -

    If you set it higher ('1s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there may be strange effects when files change on the remote.

    +

    The default is "1s" which caches files just long enough to avoid too many callbacks to rclone from the kernel.

    +

    In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as rclone using too much memory, rclone not serving files to samba and excessive time listing directories.

    +

    The kernel can cache the info about a file for the time given by "--attr-timeout". You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With "--attr-timeout 1s" this is very unlikely but not impossible. The higher you set "--attr-timeout" the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above.

    +

    If you set it higher ('10s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above.

    +

    If files don't change on the remote outside of the control of rclone then there is no chance of corruption.

    This is the same as setting the attr_timeout option in mount.fuse.
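
For example, to trade attribute freshness for fewer kernel callbacks - safe if nothing changes the remote outside of rclone - you might use something like this (the mountpoint is hypothetical)

    rclone mount remote:path /mnt/remote --attr-timeout 10s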

    Filters

    Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.

    @@ -782,11 +910,11 @@ umount /path/to/local/mount

    This mode should support all normal file system operations.

    If an upload or download fails it will be retried up to --low-level-retries times.

    rclone mount remote:path /path/to/mountpoint [flags]
    -

    Options

    +

    Options

          --allow-non-empty                    Allow mounting over a non-empty directory.
           --allow-other                        Allow access to other users.
           --allow-root                         Allow access to root user.
    -      --attr-timeout duration              Time for which file/directory attributes are cached.
    +      --attr-timeout duration              Time for which file/directory attributes are cached. (default 1s)
           --daemon                             Run mount as a daemon (background mode).
           --debug-fuse                         Debug the FUSE internals - needs -v.
           --default-permissions                Makes kernel enforce access control based on the file mode.
    @@ -809,7 +937,7 @@ umount /path/to/local/mount
    --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.

    rclone moveto

    Move file or directory from source to dest.

    -

    Synopsis

    +

    Synopsis

    If source:path is a file or directory then it moves it to a file or directory named dest:path.

This can be used to rename files, or to upload single files under a name other than their existing one. If the source is a directory then it acts exactly like the move command.

    So

    @@ -824,11 +952,11 @@ if src is directory

    This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.

    Important: Since this can cause data loss, test first with the --dry-run flag.

    rclone moveto source:path dest:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for moveto

    rclone ncdu

    Explore a remote with a text based user interface.

    -

    Synopsis

    +

    Synopsis

    This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".

    To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.

@@ -839,34 +967,35 @@ if src is directory
 c toggle counts
 g toggle graph
 n,s,C sort by name,size,count
+ ^L refresh screen
 ? to toggle help on and off
 q/ESC/c-C to quit

This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment, most importantly deleting files, but is useful as it stands.

    rclone ncdu remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for ncdu

    rclone obscure

    Obscure password for use in the rclone.conf

    -

    Synopsis

    +

    Synopsis

    Obscure password for use in the rclone.conf

    rclone obscure password [flags]
    -

    Options

    +

    Options

      -h, --help   help for obscure

    rclone rc

    Run a command against a running rclone.

    -

    Synopsis

    +

    Synopsis

This runs a command against a running rclone. By default it will use the address specified by the --rc-addr flag.

    Arguments should be passed in as parameter=value.

    The result will be returned as a JSON object by default.

    Use "rclone rc list" to see a list of all possible commands.
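
Eg (the parameter names are illustrative)

    rclone rc rc/noop param1=one param2=two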

    rclone rc commands parameter [flags]
    -

    Options

    +

    Options

      -h, --help         help for rc
           --no-output    If set don't output the JSON result.
           --url string   URL to connect to rclone remote control. (default "http://localhost:5572/")

    rclone rcat

    Copies standard input to file on remote.

    -

    Synopsis

    +

    Synopsis

    rclone rcat reads from standard input (stdin) and copies it to a single remote file.

    echo "hello world" | rclone rcat remote:path/to/file
     ffmpeg - | rclone rcat --checksum remote:path/to/file
    @@ -874,37 +1003,37 @@ ffmpeg - | rclone rcat --checksum remote:path/to/file

rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote, so please check there. Generally speaking, setting this cutoff too high will decrease your performance.

Note also that the upload cannot be retried, because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching it locally and then using rclone move to send it to the destination.
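
A sketch of that pattern (the program and paths are hypothetical) - spool the data to local disk first, then upload it with a command which can be retried

    mkdir -p /tmp/spool
    some-program > /tmp/spool/output.dat
    rclone move /tmp/spool remote:path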

    rclone rcat remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for rcat

    rclone rmdirs

    Remove empty directories under the path.

    -

    Synopsis

    +

    Synopsis

This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in it.

    If you supply the --leave-root flag, it will not remove the root directory.

    This is useful for tidying up remotes that rclone has left a lot of empty directories in.

    rclone rmdirs remote:path [flags]
    -

    Options

    +

    Options

      -h, --help         help for rmdirs
           --leave-root   Do not remove root directory if empty

    rclone serve

    Serve a remote over a protocol.

    -

    Synopsis

    +

    Synopsis

    rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, eg

    rclone serve http remote:

    Each subcommand has its own options which you can see in their help.

    rclone serve <protocol> [opts] <remote> [flags]
    -

    Options

    +

    Options

      -h, --help   help for serve

    rclone serve http

    Serve the remote over HTTP.

    -

    Synopsis

    +

    Synopsis

    rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.

    You can use the filter flags (eg --include, --exclude) to control what is served.

    The server will log errors. Use -v to see access logs.

    --bwlimit will be respected for file transfers. Use --stats to control the stats printing.

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication if advised - see the next section for info.

    +

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
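
For example, a sketch serving the remote on port 8080 on all interfaces (set up authentication as described below before doing this on a non-private network)

    rclone serve http remote: --addr :8080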

    Authentication

    @@ -972,7 +1101,7 @@ htpasswd -B htpasswd anotherUser

    This mode should support all normal file system operations.

    If an upload or download fails it will be retried up to --low-level-retries times.

    rclone serve http remote:path [flags]
    -

    Options

    +

    Options

          --addr string                        IPaddress:Port or :Port to bind server to. (default "localhost:8080")
           --cert string                        SSL PEM key (concatenation of certificate and CA certificate)
           --client-ca string                   Client certificate authority to verify clients with
    @@ -999,7 +1128,7 @@ htpasswd -B htpasswd anotherUser
    --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)

    rclone serve restic

    Serve the remote for restic's REST API.

    -

    Synopsis

    +

    Synopsis

    rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.

    Restic is a command line program for doing backups.

    The server will log errors. Use -v to see access logs.

@@ -1038,8 +1167,8 @@ snapshot 45c8fdd8 saved
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
# backup user2 stuff

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication if advised - see the next section for info.

    +

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    Authentication

    @@ -1056,8 +1185,9 @@ htpasswd -B htpasswd anotherUser

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    rclone serve restic remote:path [flags]
    -

    Options

    +

    Options

          --addr string                     IPaddress:Port or :Port to bind server to. (default "localhost:8080")
    +      --append-only                     disallow deletion of repository data
           --cert string                     SSL PEM key (concatenation of certificate and CA certificate)
           --client-ca string                Client certificate authority to verify clients with
       -h, --help                            help for restic
    @@ -1072,12 +1202,12 @@ htpasswd -B htpasswd anotherUser
    --user string User name for authentication.

    rclone serve webdav

    Serve remote:path over webdav.

    -

    Synopsis

    +

    Synopsis

    rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client or you can make a remote of type webdav to read and write it.

    NB at the moment each directory listing reads the start of each file which is undesirable: see https://github.com/golang/go/issues/22577

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication if advised - see the next section for info.

    +

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    Authentication

    @@ -1145,7 +1275,7 @@ htpasswd -B htpasswd anotherUser

    This mode should support all normal file system operations.

    If an upload or download fails it will be retried up to --low-level-retries times.

    rclone serve webdav remote:path [flags]
    -

    Options

    +

    Options

          --addr string                        IPaddress:Port or :Port to bind server to. (default "localhost:8080")
           --cert string                        SSL PEM key (concatenation of certificate and CA certificate)
           --client-ca string                   Client certificate authority to verify clients with
    @@ -1172,16 +1302,16 @@ htpasswd -B htpasswd anotherUser
    --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)

    rclone touch

    Create new file or change file modification time.

    -

    Synopsis

    +

    Synopsis

    Create new file or change file modification time.

    rclone touch remote:path [flags]
    -

    Options

    +

    Options

      -h, --help               help for touch
       -C, --no-create          Do not create the file if it does not exist.
       -t, --timestamp string   Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05)

    rclone tree

    List the contents of the remote in a tree like fashion.

    -

    Synopsis

    +

    Synopsis

    rclone tree lists the contents of a remote in a similar way to the unix tree command.

    For example

    $ rclone tree remote:path
    @@ -1197,7 +1327,7 @@ htpasswd -B htpasswd anotherUser

    You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.

rclone tree has many options for controlling the listing which are compatible with those of the unix tree command. Note that not all of them have short options, as some would conflict with rclone's short options.

    rclone tree remote:path [flags]
    -

    Options

    +

    Options

      -a, --all             All files are listed (list . files too).
       -C, --color           Turn colorization on always.
       -d, --dirs-only       List directories only.
    @@ -1261,10 +1391,10 @@ htpasswd -B htpasswd anotherUser

    This can be used when scripting to make aged backups efficiently, eg

    rclone sync remote:current-backup remote:previous-backup
     rclone sync /path/to/files remote:current-backup
    -

    Options

    +

    Options

    Rclone has a number of options to control its behaviour.

    Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

    -

    Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes and G for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.

    +

Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for PBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30, 2**40, 2**50 respectively.

    --backup-dir=DIR

    When using sync, copy or move any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.

    If --suffix is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.
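
For example, a sketch which syncs to remote:current while moving any files which would be overwritten or deleted into remote:old (the remote names are illustrative)

    rclone sync /path/to/local remote:current --backup-dir remote:old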

    @@ -1343,6 +1473,7 @@ rclone sync /path/to/files remote:current-backup

    During rmdirs it will not remove root directory, even if it's empty.

    --log-file=FILE

    Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.

    +

    Note that if you are using the logrotate program to manage rclone's logs, then you should use the copytruncate option as rclone doesn't have a signal to rotate logs.

    --log-level LEVEL

    This sets the log level for rclone. The default log level is NOTICE.

    DEBUG is equivalent to -vv. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.

    @@ -1452,6 +1583,9 @@ rclone sync /path/to/files remote:current-backup

    If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different.

    On remotes which don't support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.

    This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a --size-only check and faster than using --checksum.

    +

    --use-server-modtime

    +

Some object-store backends (e.g. Swift, S3) do not preserve file modification times (modtime). On these backends, rclone stores the original modtime as additional metadata on the object. By default it will make an API call to retrieve the metadata when the modtime is needed by an operation.

    +

    Use this flag to disable the extra API call and rely instead on the server's modified time. In cases such as a local to remote sync, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.
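
For example, a sketch of such a local to remote copy (the paths are illustrative)

    rclone copy --update --use-server-modtime /path/to/source s3:bucket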

    -v, -vv, --verbose

    With -v rclone will tell you about each file that is transferred and a small number of significant events.

    With -vv rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.

    @@ -1519,6 +1653,10 @@ export RCLONE_CONFIG_PASS

    Dump HTTP headers - will contain sensitive info such as Authorization: headers - use --dump headers to dump without Authorization: headers. Can be very verbose. Useful for debugging only.

    --dump filters

    Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.

    +

    --dump goroutines

    +

    This dumps a list of the running go-routines at the end of the command to standard output.

    +

    --dump openfiles

    +

    This dumps a list of the open files at the end of the command. It uses the lsof command to do that so you'll need that installed to use it.

    --memprofile=FILE

    Write memory profile to file. This can be analysed with go tool pprof.

    --no-check-certificate=true/false

    @@ -1578,7 +1716,7 @@ export RCLONE_CONFIG_PASS

    Environment Variables

    Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

    -

    Options

    +

    Options

    Every option in rclone can have its default set by environment variable.

    To find the name of the environment variable, first, take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.

    For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.
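
Eg (illustrative paths)

    export RCLONE_STATS=5s
    rclone copy /path/to/source remote:dest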

@@ -1925,29 +2063,44 @@ dir1/dir2/dir3/.ignore
}

    Run rclone rc on its own to see the help for the installed remote control commands.

    Supported commands

    + +

    cache/expire: Purge a remote from cache

    +

Purge a remote from the cache backend. Supports either a directory or a file. Params:

  • remote = path to remote (required)
  • withData = true/false to delete cached data (chunks) as well (optional)

    +

    Eg

    +
    rclone rc cache/expire remote=path/to/sub/folder/
    +rclone rc cache/expire remote=/ withData=true
    +

    cache/stats: Get cache stats

    +

    Show statistics for the cache remote.

    core/bwlimit: Set the bandwidth limit.

    This sets the bandwidth limit to that passed in.

    Eg

    -
    rclone core/bwlimit rate=1M
    -rclone core/bwlimit rate=off
    -

    cache/expire: Purge a remote from cache

    -

    Purge a remote from the cache backend. Supports either a directory or a file. Params:

    +
    rclone rc core/bwlimit rate=1M
    +rclone rc core/bwlimit rate=off
    +

    The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.

    +

    core/memstats: Returns the memory statistics

    +

    This returns the memory statistics of the running program. What the values mean are explained in the go docs: https://golang.org/pkg/runtime/#MemStats

    +

    The most interesting values for most people are:

+
  • HeapAlloc: This is the amount of memory rclone actually has in use.
  • HeapSys: This is the amount of memory rclone has obtained from the OS.
  • Sys: this is the total amount of memory requested from the OS. It is the virtual memory size of the process.

    core/pid: Return PID of current process

    +

    This returns PID of current process. Useful for stopping rclone process.

    +

    rc/error: This returns an error

    +

    This returns an error with the input as part of its error string. Useful for testing error handling.

    +

    rc/list: List all the registered remote control commands

    +

    This lists all the registered remote control commands as a JSON map in the commands response.

    +

    rc/noop: Echo the input to the output parameters

    +

    This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

    vfs/forget: Forget files or directories in the directory cache.

    This forgets the paths in the directory cache causing them to be re-read from the remote when needed.

    If no paths are passed in then it will forget all the paths in the directory cache.

    rclone rc vfs/forget

    Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any starting with dir will forget that dir, eg

    rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
    -

    rc/noop: Echo the input to the output parameters

    -

    This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

    -

    rc/error: This returns an error

    -

    This returns an error with the input as part of its error string. Useful for testing error handling.

    -

    rc/list: List all the registered remote control commands

    -

    This lists all the registered remote control commands as a JSON map in the commands response.

    +

    Accessing the remote control via HTTP

    Rclone implements a simple HTTP based protocol.

Each endpoint takes a JSON object and returns a JSON object or an error. The JSON objects are essentially a map of string names to values.
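
For example, you can call the rc/noop endpoint with curl, assuming rclone was started with the default --rc-addr of localhost:5572 (parameters may be passed as URL arguments)

    curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'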

@@ -2103,6 +2256,14 @@ $ echo $?
R/W

+Mega
+-
+No
+No
+Yes
+-
+
+
Microsoft Azure Blob Storage
MD5
Yes
@@ -2110,15 +2271,15 @@ $ echo $?
No
R/W

-
+
Microsoft OneDrive
-SHA1
+SHA1 ‡‡
Yes
Yes
No
R

-
+
Openstack Swift
MD5
Yes
@@ -2126,7 +2287,7 @@ $ echo $?
No
R/W

-
+
pCloud
MD5, SHA1
Yes
@@ -2134,7 +2295,7 @@ $ echo $?
No
W

-
+
QingStor
MD5
No
@@ -2142,7 +2303,7 @@ $ echo $?
No
R/W

-
+
SFTP
MD5, SHA1 ‡
Yes
@@ -2150,7 +2311,7 @@ $ echo $?
No
-

-
+
WebDAV
-
Yes ††
@@ -2158,7 +2319,7 @@ $ echo $?
No
-

-
+
Yandex Disk
MD5
Yes
@@ -2166,7 +2327,7 @@ $ echo $?
No
R/W

-
+
The local filesystem
All
Yes
@@ -2182,6 +2343,7 @@ $ echo $?

    † Note that Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.

    ‡ SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH.

    †† WebDAV supports modtimes when used with Owncloud and Nextcloud only.

    +

    ‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft's own QuickXorHash.

    ModTime

The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum flag.

    All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.

@@ -2216,6 +2378,8 @@ $ echo $?
CleanUp
ListR
StreamUpload
+LinkSharing
+About
@@ -2228,6 +2392,8 @@ $ echo $?
No #575
No
No
+No #2178
+No
Amazon S3
@@ -2238,6 +2404,8 @@ $ echo $?
No
Yes
Yes
+No #2178
+No
Backblaze B2
@@ -2248,6 +2416,8 @@ $ echo $?
Yes
Yes
Yes
+No #2178
+No
Box
@@ -2258,6 +2428,8 @@ $ echo $?
No #575
No
Yes
+No #2178
+No
Dropbox
@@ -2268,6 +2440,8 @@ $ echo $?
No #575
No
Yes
+Yes
+Yes
FTP
@@ -2278,6 +2452,8 @@ $ echo $?
No
No
Yes
+No #2178
+No
Google Cloud Storage
@@ -2288,6 +2464,8 @@ $ echo $?
No
Yes
Yes
+No #2178
+No
Google Drive
@@ -2298,6 +2476,8 @@ $ echo $?
Yes
No
Yes
+Yes
+Yes
HTTP
@@ -2308,6 +2488,8 @@ $ echo $?
No
No
No
+No #2178
+No
Hubic
@@ -2318,8 +2500,22 @@ $ echo $?
No
Yes
Yes
+No #2178
+Yes
+Mega
+Yes
+No
+Yes
+Yes
+No
+No
+No
+No #2178
+Yes
+
+
Microsoft Azure Blob Storage
Yes
Yes
@@ -2328,8 +2524,10 @@ $ echo $?
No
Yes
No
+No #2178
+No

-
+
Microsoft OneDrive
Yes
Yes
@@ -2338,8 +2536,10 @@ $ echo $?
No #575
No
No
+No #2178
+Yes

-
+
Openstack Swift
Yes †
Yes
@@ -2348,8 +2548,10 @@ $ echo $?
No
Yes
Yes
+No #2178
+Yes

-
+
pCloud
Yes
Yes
@@ -2358,8 +2560,10 @@ $ echo $?
Yes
No
No
+No #2178
+Yes

-
+
QingStor
No
Yes
@@ -2368,8 +2572,10 @@ $ echo $?
No
Yes
No
+No #2178
+No

-
+
SFTP
No
No
@@ -2378,8 +2584,10 @@ $ echo $?
No
No
Yes
+No #2178
+No

-
+
WebDAV
Yes
Yes
@@ -2388,8 +2596,10 @@ $ echo $?
No
No
Yes ‡
+No #2178
+No

-
+
Yandex Disk
Yes
No
@@ -2398,8 +2608,10 @@ $ echo $?
Yes
Yes
Yes
+No #2178
+No

-
+
The local filesystem
Yes
No
@@ -2408,6 +2620,8 @@ $ echo $?
No
No
Yes
+No
+Yes
@@ -2430,6 +2644,11 @@ $ echo $?

    The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list flag to work. See the rclone docs for more details.

    StreamUpload

    Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat.

    +

    LinkSharing

    +

    Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don't have an account on the particular cloud provider.

    +

    About

    +

    This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash.

    +

    If the server can't do About then rclone about will return an error.

    Alias

    The alias remote provides a new name for another remote.

    Paths may be as deep as required or a local path, eg remote:directory/subdirectory or /directory/subdirectory.

    @@ -2639,8 +2858,28 @@ y/e/d> y

    Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.

At the time of writing (Jan 2016) it is in the area of 50GB per file. This means that larger files are likely to fail.

Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as it would any other failure. To avoid this problem, use the --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.

    -

    Amazon S3

    +

    Amazon S3 Storage Providers

    +

    The S3 backend can be used with a number of different providers:

+
  • AWS S3
  • Ceph
  • DigitalOcean Spaces
  • Dreamhost
  • IBM COS S3
  • Minio
  • Wasabi

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

    +

    Once you have made a remote (see the provider specific section above) you can use it like this:

    +

    See all buckets

    +
    rclone lsd remote:
    +

    Make a new bucket

    +
    rclone mkdir remote:bucket
    +

    List the contents of a bucket

    +
    rclone ls remote:bucket
    +

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    +
    rclone sync /home/local/directory remote:bucket
    +

    AWS S3

    Here is an example of making an s3 configuration. First run

    rclone config

    This will guide you through an interactive setup process.

@@ -2656,7 +2895,7 @@ Choose a number from below, or type in your own value
 \ "alias"
 2 / Amazon Drive
 \ "amazon cloud drive"
- 3 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
 \ "s3"
 4 / Backblaze B2
 \ "b2"
@@ -2664,6 +2903,25 @@ Choose a number from below, or type in your own value
23 / http Connection
 \ "http"
Storage> s3
+Choose your S3 provider.
+Choose a number from below, or type in your own value
+ 1 / Amazon Web Services (AWS) S3
+   \ "AWS"
+ 2 / Ceph Object Storage
+   \ "Ceph"
+ 3 / Digital Ocean Spaces
+   \ "DigitalOcean"
+ 4 / Dreamhost DreamObjects
+   \ "Dreamhost"
+ 5 / IBM COS S3
+   \ "IBMCOS"
+ 6 / Minio Object Storage
+   \ "Minio"
+ 7 / Wasabi Object Storage
+   \ "Wasabi"
+ 8 / Any other S3 compatible provider
+   \ "Other"
+provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
@@ -2675,7 +2933,7 @@ AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YYY
-Region to connect to. Leave blank if you are using an S3 clone and you don't have a region.
+Region to connect to.
Choose a number from below, or type in your own value
 / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
@@ -2720,13 +2978,9 @@ Choose a number from below, or type in your own value
 / South America (Sao Paulo) Region
14 | Needs location constraint sa-east-1.
 \ "sa-east-1"
-  / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
-15 | Set this and make sure you set the endpoint.
-  \ "other-v2-signature"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
-Specify if using an S3 clone such as Ceph.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
@@ -2795,10 +3049,14 @@ Choose a number from below, or type in your own value
 \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
 \ "STANDARD_IA"
+ 5 / One Zone Infrequent Access storage class
+   \ "ONEZONE_IA"
storage_class> 1
Remote config
--------------------
[remote]
+type = s3
+provider = AWS
env_auth = false
access_key_id = XXX
secret_access_key = YYY
@@ -2812,18 +3070,12 @@ storage_class =
y) Yes this is OK
e) Edit this remote
d) Delete this remote
-y/e/d> y
-

    This remote is called remote and can now be used like this

    -

    See all buckets

    -
    rclone lsd remote:
    -

    Make a new bucket

    -
    rclone mkdir remote:bucket
    -

    List the contents of a bucket

    -
    rclone ls remote:bucket
    -

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    -
    rclone sync /home/local/directory remote:bucket
    +y/e/d>

    --fast-list

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    +

    --update and --use-server-modtime

    +

As noted below, the modified time is stored as metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

    +

    For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
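A sketch of combining the two flags (the directory and bucket names are made up):

    rclone sync --update --use-server-modtime /home/local/directory remote:bucket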

    Modified time

The modified time is stored as metadata on the object as X-Amz-Meta-Mtime, as a floating point number of seconds since the epoch, accurate to 1 ns.
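For example, a file last modified at 2018-01-01 00:00:00 UTC would carry metadata along these lines (the exact value shown is illustrative):

    X-Amz-Meta-Mtime: 1514764800.000000000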

    Multipart uploads

    @@ -2831,20 +3083,30 @@ y/e/d> y

    Buckets and Regions

    With Amazon S3 you can list buckets (rclone lsd) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region.

    Authentication

    -

    There are two ways to supply rclone with a set of AWS credentials. In order of precedence:

    +

    There are a number of ways to supply rclone with a set of AWS credentials, with and without using the environment.

    +

    The different authentication methods are tried in this order:

If none of these options actually ends up providing rclone with AWS credentials then S3 interaction will be non-authenticated (see below).

    S3 Permissions

    @@ -2903,41 +3165,27 @@ y/e/d> y +

    --s3-chunk-size=SIZE

    +

    Any files larger than this will be uploaded in chunks of this size. The default is 5MB. The minimum is 5MB.

    +

    Note that 2 chunks of this size are buffered in memory per transfer.

    +

    If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.
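For example, to use 64MB chunks (roughly 128MB of buffer per transfer, since two chunks are buffered; the paths are made up):

    rclone copy --s3-chunk-size 64M /data/big-files remote:bucket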

    Anonymous access to public buckets

    -

    If you want to use rclone to access a public bucket, configure with a blank access_key_id and secret_access_key. Eg

    -
    No remotes found - make a new one
    -n) New remote
    -q) Quit config
    -n/q> n
    -name> anons3
    -What type of source is it?
    -Choose a number from below
    - 1) amazon cloud drive
    - 2) b2
    - 3) drive
    - 4) dropbox
    - 5) google cloud storage
    - 6) swift
    - 7) hubic
    - 8) local
    - 9) onedrive
    -10) s3
    -11) yandex
    -type> 10
    -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
    -Choose a number from below, or type in your own value
    - * Enter AWS credentials in the next step
    - 1) false
    - * Get AWS credentials from the environment (env vars or IAM)
    - 2) true
    -env_auth> 1
    -AWS Access Key ID - leave blank for anonymous access or runtime credentials.
    -access_key_id>
    -AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
    -secret_access_key>
    -...
    +

    If you want to use rclone to access a public bucket, configure with a blank access_key_id and secret_access_key. Your config should end up looking like this:

    +
    [anons3]
    +type = s3
    +provider = AWS
    +env_auth = false
    +access_key_id = 
    +secret_access_key = 
    +region = us-east-1
    +endpoint = 
    +location_constraint = 
    +acl = private
    +server_side_encryption = 
    +storage_class = 

    Then use it as normal with the name of the public bucket, eg

    rclone lsd anons3:1000genomes

    You will be able to list and copy data but not upload it.

    @@ -2946,15 +3194,16 @@ secret_access_key>

    To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

    [ceph]
     type = s3
    +provider = Ceph
     env_auth = false
     access_key_id = XXX
     secret_access_key = YYY
    -region = 
    +region =
     endpoint = https://ceph.endpoint.example.com
    -location_constraint = 
    -acl = 
    -server_side_encryption = 
    -storage_class = 
+location_constraint =
+acl =
+server_side_encryption =
+storage_class =

    Note also that Ceph sometimes puts / in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the / escaped as \/. Make sure you only write / in the secret access key.

    Eg the dump from Ceph looks something like this (irrelevant keys removed).

    {
    @@ -2973,6 +3222,8 @@ storage_class = 

    Dreamhost DreamObjects is an object storage system based on CEPH.

    To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

    [dreamobjects]
    +type = s3
    +provider = DreamHost
     env_auth = false
     access_key_id = your_access_key
     secret_access_key = your_secret_key
    @@ -2991,28 +3242,29 @@ storage_class =
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
-region> 
+region>
endpoint> nyc3.digitaloceanspaces.com
-location_constraint> 
-acl> 
-storage_class> 
+location_constraint>
+acl>
+storage_class>

    The resulting configuration file should look like:

    [spaces]
     type = s3
    +provider = DigitalOcean
     env_auth = false
     access_key_id = YOUR_ACCESS_KEY
     secret_access_key = YOUR_SECRET_KEY
    -region = 
    +region =
     endpoint = nyc3.digitaloceanspaces.com
    -location_constraint = 
    -acl = 
    -server_side_encryption = 
    -storage_class = 
+location_constraint =
+acl =
+server_side_encryption =
+storage_class =

    Once configured, you can create a new Space and begin copying files. For example:

    rclone mkdir spaces:my-new-space
     rclone copy /path/to/files spaces:my-new-space

    IBM COS (S3)

    -

    Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (https://www.ibm.com/cloud/object-storage)

    +

    Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage)

    To configure access to IBM COS S3, follow the steps below:

    1. Run rclone config and select n for a new remote.

@@ -3023,136 +3275,116 @@ s) Set configuration password
q) Quit config
n/s/q> n
    2. Enter the name for the configuration

      -
      name> IBM-COS-XREGION
    3. +
      name> <YOUR NAME>
    4. Select "s3" storage.

      -
      Type of storage to configure.
      -Choose a number from below, or type in your own value
      - 1 / Amazon Drive
      +
      Choose a number from below, or type in your own value
      +1 / Alias for a existing remote
      +\ "alias"
      +2 / Amazon Drive
       \ "amazon cloud drive"
      -2 / Amazon S3 (also Dreamhost, Ceph, Minio, IBM COS(S3))
+3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
       \ "s3"
      -3 / Backblaze B2
      -Storage> 2
    5. -
    6. Select "Enter AWS credentials…"

      -
      Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
      +4 / Backblaze B2
      +\ "b2"
      +[snip]
      +23 / http Connection
      +\ "http"
      +Storage> 3
    7. +
    8. Select IBM COS as the S3 Storage Provider.

      +
      Choose the S3 provider.
       Choose a number from below, or type in your own value
      - 1 / Enter AWS credentials in the next step
      -\ "false"
      - 2 / Get AWS credentials from the environment (env vars or IAM)
      -\ "true"
      -env_auth> 1
9. + 1 / Choose this option to configure Storage to AWS S3
   + \ "AWS"
   + 2 / Choose this option to configure Storage to Ceph Systems
   + \ "Ceph"
   + 3 / Choose this option to configure Storage to Dreamhost
   + \ "Dreamhost"
   + 4 / Choose this option to the configure Storage to IBM COS S3
   + \ "IBMCOS"
   + 5 / Choose this option to the configure Storage to Minio
   + \ "Minio"
   + Provider>4
    10. Enter the Access Key and Secret.

      AWS Access Key ID - leave blank for anonymous access or runtime credentials.
       access_key_id> <>
       AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
       secret_access_key> <>
    11. -
    12. Select "other-v4-signature" region.

      -
      Region to connect to.
      +
13. Specify the endpoint for IBM COS. For Public IBM COS, choose from the options below. For On Premise IBM COS, enter an endpoint address.

      +
      Endpoint for IBM COS S3 API.
      +Specify if using an IBM COS On Premise.
       Choose a number from below, or type in your own value
      -/ The default endpoint - a good choice if you are unsure.
      - 1 | US Region, Northern Virginia or Pacific Northwest.
      -| Leave location constraint empty.
      -\ "us-east-1"
      -/ US East (Ohio) Region
      -2 | Needs location constraint us-east-2.
      -\ "us-east-2"
      -/ US West (Oregon) Region
      -…<omitted>…
      -15 | eg Ceph/Dreamhost
      -| set this and make sure you set the endpoint.
      -\ "other-v2-signature"
      -/ If using an S3 clone that understands v4 signatures set this
      -16 | and make sure you set the endpoint.
      -\ "other-v4-signature
      -region> 16
    14. -
    15. Enter the endpoint FQDN.

      -
      Leave blank if using AWS to use the default endpoint for the region.
      -Specify if using an S3 clone such as Ceph.
      -endpoint> s3-api.us-geo.objectstorage.softlayer.net
    16. -
17. Specify an IBM COS Location Constraint. -
        -
      1. Currently, the only IBM COS values for LocationConstraint are: us-standard / us-vault / us-cold / us-flex us-east-standard / us-east-vault / us-east-cold / us-east-flex us-south-standard / us-south-vault / us-south-cold / us-south-flex eu-standard / eu-vault / eu-cold / eu-flex

        -
        Location constraint - must be set to match the Region. Used when creating buckets only.
        -Choose a number from below, or type in your own value
        - 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
        -\ ""
        - 2 / US East (Ohio) Region.
        -\ "us-east-2"
        - …<omitted>…
        -location_constraint> us-standard
      2. -
    18. -
    19. Specify a canned ACL.

+ 1 / US Cross Region Endpoint
+   \ "s3-api.us-geo.objectstorage.softlayer.net"
+ 2 / US Cross Region Dallas Endpoint
+   \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
+ 3 / US Cross Region Washington DC Endpoint
+   \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
+ 4 / US Cross Region San Jose Endpoint
+   \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
+ 5 / US Cross Region Private Endpoint
+   \ "s3-api.us-geo.objectstorage.service.networklayer.com"
+ 6 / US Cross Region Dallas Private Endpoint
+   \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
+ 7 / US Cross Region Washington DC Private Endpoint
+   \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
+ 8 / US Cross Region San Jose Private Endpoint
+   \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
+ 9 / US Region East Endpoint
+   \ "s3.us-east.objectstorage.softlayer.net"
+10 / US Region East Private Endpoint
+   \ "s3.us-east.objectstorage.service.networklayer.com"
+11 / US Region South Endpoint
+[snip]
+34 / Toronto Single Site Private Endpoint
+   \ "s3.tor01.objectstorage.service.networklayer.com"
+endpoint>1
    20. +
21. Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list; just hit enter.

      +
       1 / US Cross Region Standard
      +   \ "us-standard"
      + 2 / US Cross Region Vault
      +   \ "us-vault"
      + 3 / US Cross Region Cold
      +   \ "us-cold"
      + 4 / US Cross Region Flex
      +   \ "us-flex"
      + 5 / US East Region Standard
      +   \ "us-east-standard"
      + 6 / US East Region Vault
      +   \ "us-east-vault"
      + 7 / US East Region Cold
      +   \ "us-east-cold"
      + 8 / US East Region Flex
      +   \ "us-east-flex"
      + 9 / US South Region Standard
      +   \ "us-south-standard"
      +10 / US South Region Vault
      +   \ "us-south-vault"
      +[snip]
      +32 / Toronto Flex
      +   \ "tor01-flex"
      +location_constraint>1
    22. +
23. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.

      Canned ACL used when creating buckets and/or storing objects in S3.
       For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
       Choose a number from below, or type in your own value
      -1 / Owner gets FULL_CONTROL. No one else has access rights (default).
      -\ "private"
      -2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
      -\ "public-read"
      -/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
      - 3 | Granting this on a bucket is generally not recommended.
      -\ "public-read-write"
      - 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
      -\ "authenticated-read"
      -/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
      -5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
      -\ "bucket-owner-read"
      -/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
      - 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
      -\ "bucket-owner-full-control"
      +  1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
      +  \ "private"
      +  2  / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
      +  \ "public-read"
      +  3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
      +  \ "public-read-write"
      +  4  / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
      +  \ "authenticated-read"
       acl> 1
    24. -
    25. Set the SSE option to "None".

      -
      Choose a number from below, or type in your own value
      - 1 / None
      -\ ""
      -2 / AES256
      -\ "AES256"
      -server_side_encryption> 1
    26. -
    27. Set the storage class to "None" (IBM COS uses the LocationConstraint at the bucket level).

      -
      The storage class to use when storing objects in S3.
      -Choose a number from below, or type in your own value
      -1 / Default
      -\ ""
      - 2 / Standard storage class
      -\ "STANDARD"
      - 3 / Reduced redundancy storage class
      -\ "REDUCED_REDUNDANCY"
      - 4 / Standard Infrequent Access storage class
      - \ "STANDARD_IA"
      -storage_class>
    28. -
    29. Review the displayed configuration and accept to save the "remote" then quit.

      -
      Remote config
      ---------------------
      -[IBM-COS-XREGION]
      -env_auth = false
      -access_key_id = <>
      -secret_access_key = <>
      -region = other-v4-signature
      +
30. Review the displayed configuration and accept to save the "remote", then quit. The config file should look like this:

      +
      [xxx]
      +type = s3
+provider = IBMCOS
      +access_key_id = xxx
      +secret_access_key = yyy
       endpoint = s3-api.us-geo.objectstorage.softlayer.net
       location_constraint = us-standard
      -acl = private
      -server_side_encryption = 
      -storage_class =
      ---------------------
      -y) Yes this is OK
      -e) Edit this remote
      -d) Delete this remote
      -y/e/d> y
      -Remote config
      -Current remotes:
      -
      -Name                 Type
      -====                 ====
      -IBM-COS-XREGION      s3
      -
      -e) Edit existing remote
      -n) New remote
      -d) Delete remote
      -r) Rename remote
      -c) Copy remote
      -s) Set configuration password
      -q) Quit config
      -e/n/d/r/c/s/q> q
    31. +acl = private
    32. Execute rclone commands

      1)  Create a bucket.
           rclone mkdir IBM-COS-XREGION:newbucket
      @@ -3205,6 +3437,8 @@ location_constraint>
       server_side_encryption>

      Which makes the config file look like this

      [minio]
      +type = s3
      +provider = Minio
       env_auth = false
       access_key_id = USWUXHGYZQYFYFFIT3RE
       secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
      @@ -3258,21 +3492,21 @@ Choose a number from below, or type in your own value
        1 / Empty for US Region, Northern Virginia or Pacific Northwest.
          \ ""
       [snip]
      -location_constraint> 
      +location_constraint>
       Canned ACL used when creating buckets and/or storing objects in S3.
       For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
       Choose a number from below, or type in your own value
        1 / Owner gets FULL_CONTROL. No one else has access rights (default).
          \ "private"
       [snip]
      -acl> 
      +acl>
       The server-side encryption algorithm used when storing this object in S3.
       Choose a number from below, or type in your own value
        1 / None
          \ ""
        2 / AES256
          \ "AES256"
      -server_side_encryption> 
      +server_side_encryption>
       The storage class to use when storing objects in S3.
       Choose a number from below, or type in your own value
        1 / Default
      @@ -3283,7 +3517,7 @@ Choose a number from below, or type in your own value
          \ "REDUCED_REDUNDANCY"
        4 / Standard Infrequent Access storage class
          \ "STANDARD_IA"
      -storage_class> 
      +storage_class>
       Remote config
       --------------------
       [wasabi]
      @@ -3292,10 +3526,10 @@ access_key_id = YOURACCESSKEY
       secret_access_key = YOURSECRETACCESSKEY
       region = us-east-1
       endpoint = s3.wasabisys.com
      -location_constraint = 
      -acl = 
      -server_side_encryption = 
      -storage_class = 
      +location_constraint =
      +acl =
      +server_side_encryption =
      +storage_class =
       --------------------
       y) Yes this is OK
       e) Edit this remote
      @@ -3303,15 +3537,17 @@ d) Delete this remote
       y/e/d> y

      This will leave the config file looking like this.

      [wasabi]
      +type = s3
      +provider = Wasabi
       env_auth = false
       access_key_id = YOURACCESSKEY
       secret_access_key = YOURSECRETACCESSKEY
      -region = us-east-1
      +region =
       endpoint = s3.wasabisys.com
      -location_constraint = 
      -acl = 
      -server_side_encryption = 
      -storage_class = 
+location_constraint =
+acl =
+server_side_encryption =
+storage_class =

      Backblaze B2

      B2 is Backblaze's cloud storage system.

      Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

      @@ -3792,7 +4028,7 @@ chunk_total_size = 10G

Flag to clear all the cached data for this remote before starting.

      Default: not set

      --cache-chunk-size=SIZE

      -

      The size of a chunk (partial file data). Use lower numbers for slower connections.

      +

      The size of a chunk (partial file data). Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur.

      Default: 5M
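A sketch of setting a larger chunk size on the command line (the remote name and mount point are made up). If a different size was used on a previous run, clear the chunk cache first as described above:

    rclone mount --cache-chunk-size 10M mycache: /mnt/media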

      --cache-total-chunk-size=SIZE

The total size that the chunks can take up on the local disk. If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value.

      @@ -4169,7 +4405,7 @@ y/e/d> y

      Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.

      Limitations

      Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

      -

      There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempt to upload one of those file names, but the sync won't fail.

      +

      There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail.

      If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.

      FTP

      FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package.

      @@ -4422,7 +4658,7 @@ y/e/d> y

      Service Account support

      You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

      To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.

      -

      To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow.

      +

      To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.
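A hedged sketch of the two approaches for a remote named gcs (the file path and placeholder are illustrative). Either point at the credentials file in the config:

    [gcs]
    type = google cloud storage
    service_account_file = /home/user/sa-credentials.json

or supply the contents via the RCLONE_CONFIG_<NAME>_<OPTION> environment variable convention shown elsewhere in this manual:

    export RCLONE_CONFIG_GCS_SERVICE_ACCOUNT_CREDENTIALS='<contents of sa-credentials.json>'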

      --fast-list

      This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

      Modified time

      @@ -4538,7 +4774,7 @@ y/e/d> y

      Note also that rclone can't access any data under the "Backups" tab on the google drive web interface yet.

      Service Account support

      You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

      -

      To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt during rclone config and rclone won't use the browser based authentication flow.

      +

      To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt during rclone config and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.

      Use case - Google Apps/G-suite account and individual Drive

      Let's say that you are the administrator of a Google Apps (old) or G-suite account. The goal is to store data on an individual's Drive account, who IS a member of the domain. We'll call the domain example.com, and the user foo@example.com.

There are a few steps we need to go through to accomplish this:

      @@ -4626,6 +4862,8 @@ y/e/d> y

      By default rclone will send all files to the trash when deleting files. If deleting them permanently is required then use the --drive-use-trash=false flag, or set the equivalent environment variable.

      Emptying trash

      If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.

      +

      Quota information

      +

      To view your current quota you can use the rclone about remote: command which will display your usage limit (quota), the usage in Google Drive, the size of all files in the Trash and the space used by other Google services such as Gmail. This command does not take any path arguments.

      Specific options

      Here are the command line options specific to this cloud storage system.

      --drive-auth-owner-only

      @@ -4984,6 +5222,69 @@ y/e/d> y

      Limitations

      This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.

      The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

      +

      Mega

      +

      Mega is a cloud storage and file hosting service known for its security feature where all files are encrypted locally before they are uploaded. This prevents anyone (including employees of Mega) from accessing the files without knowledge of the key used for encryption.

      +

      This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.

      +

      Paths are specified as remote:path

      +

      Paths may be as deep as required, eg remote:directory/subdirectory.

      +

      Here is an example of how to make a remote called remote. First run:

      +
       rclone config
      +

      This will guide you through an interactive setup process:

      +
      No remotes found - make a new one
      +n) New remote
      +s) Set configuration password
      +q) Quit config
      +n/s/q> n
      +name> remote
      +Type of storage to configure.
      +Choose a number from below, or type in your own value
      + 1 / Alias for a existing remote
      +   \ "alias"
      +[snip]
      +14 / Mega
      +   \ "mega"
      +[snip]
      +23 / http Connection
      +   \ "http"
      +Storage> mega
      +User name
      +user> you@example.com
      +Password.
      +y) Yes type in my own password
      +g) Generate random password
      +n) No leave this optional password blank
      +y/g/n> y
      +Enter the password:
      +password:
      +Confirm the password:
      +password:
      +Remote config
      +--------------------
      +[remote]
      +type = mega
      +user = you@example.com
      +pass = *** ENCRYPTED ***
      +--------------------
      +y) Yes this is OK
      +e) Edit this remote
      +d) Delete this remote
      +y/e/d> y
      +

      Once configured you can then use rclone like this,

      +

      List directories in top level of your Mega

      +
      rclone lsd remote:
      +

      List all the files in your Mega

      +
      rclone ls remote:
      +

To copy a local directory to a Mega directory called backup

      +
      rclone copy /home/source remote:backup
      +

      Modified time and hashes

      +

      Mega does not support modification times or hashes yet.

      +

      Duplicated files

      +

      Mega can have two files with exactly the same name and path (unlike a normal file system).

      +

      Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

      +

      Use rclone dedupe to fix duplicated files.
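For example, to resolve duplicates interactively, or non-interactively keeping the most recently modified copy (the path is made up):

    rclone dedupe remote:backup
    rclone dedupe --dedupe-mode newest remote:backup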

      +

      Limitations

      +

This backend uses the go-mega library, an open source Go library implementing the Mega API. There doesn't appear to be any documentation for the Mega protocol beyond the Mega C++ SDK source code, so there are likely quite a few errors still remaining in this library.

      +

      Mega allows duplicate files which may confuse rclone.

      Microsoft Azure Blob Storage

      Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.

Here is an example of making a Microsoft Azure Blob Storage configuration for a remote called remote. First run:

      @@ -5074,7 +5375,7 @@ y/e/d> y

      Cutoff for switching to chunked upload - must be <= 256MB. The default is 256MB.

      --azureblob-chunk-size=SIZE

      Upload chunk size. Default 4MB. Note that this is stored in memory and there may be up to --transfers chunks stored at once in memory. This can be at most 100MB.

      -

      Limitations

      +

      Limitations

      MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.

      Microsoft OneDrive

      Paths are specified as remote:path

@@ -5166,16 +5467,17 @@ b) Business
p) Personal
b/p>

      After that rclone requires an authentication of your account. The application will first authenticate your account, then query the OneDrive resource URL and do a second (silent) authentication for this resource URL.

      -

      Modified time and hashes

      +

      Modified time and hashes

      OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

      -

      One drive supports SHA1 type hashes, so you can use --checksum flag.

      +

      OneDrive personal supports SHA1 type hashes. OneDrive for business and Sharepoint Server support QuickXorHash.

      +

      For all types of OneDrive you can use the --checksum flag.

      Deleting files

      Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.

      Specific options

      Here are the command line options specific to this cloud storage system.

      --onedrive-chunk-size=SIZE

Above this size files will be chunked - must be a multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.
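Since the value must be a multiple of 320k, the 10MB default corresponds to 320k × 32. A larger valid setting (320k × 64) might look like this (the paths are made up):

    rclone copy --onedrive-chunk-size 20M /data/big-files remote:backup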

      -

      Limitations

      +

      Limitations

      Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file name has a ? in it, it will be mapped to the identical looking full-width character ？ instead.

      The largest allowed file size is 10GiB (10,737,418,240 bytes).

@@ -5491,6 +5793,9 @@ export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
rclone lsd myremote:

      --fast-list

      This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

      +

      --update and --use-server-modtime

      +

As noted below, the modified time is stored as metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

      +

      For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.

      Specific options

      Here are the command line options specific to this cloud storage system.

      --swift-chunk-size=SIZE

      @@ -5498,7 +5803,7 @@ rclone lsd myremote:

      Modified time

      The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

      This is a defacto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

      -

      Limitations

      +

      Limitations

      The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

      Troubleshooting

      Rclone gives Failed to create file system for "remote:": Bad Request

      @@ -5595,15 +5900,16 @@ y/e/d> y
      rclone ls remote:

To copy a local directory to a pCloud directory called backup

      rclone copy /home/source remote:backup
      -

      Modified time and hashes

      +

      Modified time and hashes

      pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a Modification time pCloud requires the object be re-uploaded.

      pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum flag.

      Deleting files

      Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup can be used to empty the trash.

      SFTP

      SFTP is the Secure (or SSH) File Transfer Protocol.

      -

      It runs over SSH v2 and is standard with most modern SSH installations.

      +

      SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.

      Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.

      +

      Note that some SFTP servers will need the leading / - Synology is a good example of this.

      Here is an example of making an SFTP configuration. First run

      rclone config

      This will guide you through an interactive setup process.

      @@ -5708,8 +6014,9 @@ y/e/d> y

      Modified times are stored on the server to 1 second precision.

      Modified times are used in syncing and are fully supported.

Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your rclone backend configuration to disable this behaviour.

      -

      Limitations

      -

      SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH. This remote check can be disabled by setting the configuration option disable_hashcheck. This may be required if you're connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited.

      +

      Limitations

      +

      SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming.

      +

+Note that on some SFTP servers (eg Synology) the paths are different for SSH and SFTP, so the hashes can't be calculated properly. For these servers, using disable_hashcheck is a good idea.
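A hedged sketch of a backend entry with both of these options set (the host and user are made up):

    [mysftp]
    type = sftp
    host = example.com
    user = sftpuser
    set_modtime = false
    disable_hashcheck = true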

      The only ssh agent supported under Windows is Putty's pageant.

The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher setting in the configuration file to true. Further details on the insecurity of this cipher can be found in this paper (http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).

      SFTP isn't supported under plan9 until this issue is fixed.

@@ -5782,7 +6089,9 @@ Choose a number from below, or type in your own value
 \ "nextcloud"
 2 / Owncloud
 \ "owncloud"
- 3 / Other site/service or software
+ 3 / Sharepoint
+   \ "sharepoint"
+ 4 / Other site/service or software
 \ "other"
vendor> 1
User name
@@ -5815,7 +6124,7 @@ y/e/d> y
      rclone ls remote:

To copy a local directory to a WebDAV directory called backup

      rclone copy /home/source remote:backup
      -

      Modified time and hashes

      +

      Modified time and hashes

      Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times.

      Hashes are not supported.

      Owncloud

@@ -5835,6 +6144,23 @@ user = YourUserName
pass = encryptedpassword

      If you are using put.io with rclone mount then use the --read-only flag to signal to the OS that it can't write to the mount.

      For more help see the put.io webdav docs.

      +

      Sharepoint

      +

Rclone can be used with Sharepoint provided by OneDrive for Business or Office365 Education accounts. This feature is only needed for a few of these accounts, mostly Office365 Education ones. These accounts are sometimes not verified by the domain owner (github#1975).

      +

      This means that these accounts can't be added using the official API (other Accounts should work with the "onedrive" option). However, it is possible to access them using webdav.

      +

      To use a sharepoint remote with rclone, add it like this: First, you need to get your remote's URL:

      + +

You'll only need this URL up to the email address. After that, you'll most likely want to add "/Documents". That subdirectory contains the actual data stored on your OneDrive.

      +

      Add the remote to rclone like this: Configure the url as https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents and use your normal account email and password for user and pass. If you have 2FA enabled, you have to generate an app password. Set the vendor to sharepoint.

      +

      Your config file should look like this:

      +
      [sharepoint]
      +type = webdav
      +url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
+vendor = sharepoint
      +user = YourEmailAddress
      +pass = encryptedpassword

      Yandex Disk

      Yandex Disk is a cloud storage solution created by Yandex.

      Yandex paths may be as deep as required, eg remote:directory/subdirectory.

@@ -5969,6 +6295,10 @@ nounc = true
6 two/three
6 b/two
6 b/one
+

      --local-no-check-updated

      +

      Don't check to see if the files change during upload.

      +

      Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts can't copy - source file is being updated if the file changes during upload.

      +

However, on some file systems this modification time check may fail (eg Glusterfs #2206), so this check can be disabled with this flag.
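For example, when copying from a Glusterfs mount (the paths are made up):

    rclone copy --local-no-check-updated /mnt/glusterfs/data remote:backup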

      --local-no-unicode-normalization

      This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead.

      --one-file-system, -x

      @@ -5996,6 +6326,102 @@ nounc = true

      This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.

      Changelog