From e4c380b2a86faba7699385fa264964a7721b1ee6 Mon Sep 17 00:00:00 2001 From: Nick Craig-Wood Date: Sat, 28 Apr 2018 11:46:27 +0100 Subject: [PATCH] Version v1.41 --- MANUAL.html | 1134 ++++++++---- MANUAL.md | 1294 +++++++++---- MANUAL.txt | 1282 +++++++++---- docs/content/changelog.md | 94 +- docs/content/commands/rclone.md | 30 +- docs/content/commands/rclone_about.md | 214 +++ docs/content/commands/rclone_authorize.md | 26 +- docs/content/commands/rclone_cachestats.md | 26 +- docs/content/commands/rclone_cat.md | 26 +- docs/content/commands/rclone_check.md | 26 +- docs/content/commands/rclone_cleanup.md | 26 +- docs/content/commands/rclone_config.md | 26 +- docs/content/commands/rclone_config_create.md | 24 +- docs/content/commands/rclone_config_delete.md | 24 +- docs/content/commands/rclone_config_dump.md | 24 +- docs/content/commands/rclone_config_edit.md | 24 +- docs/content/commands/rclone_config_file.md | 24 +- .../commands/rclone_config_password.md | 24 +- .../commands/rclone_config_providers.md | 24 +- docs/content/commands/rclone_config_show.md | 24 +- docs/content/commands/rclone_config_update.md | 24 +- docs/content/commands/rclone_copy.md | 26 +- docs/content/commands/rclone_copyto.md | 26 +- docs/content/commands/rclone_cryptcheck.md | 26 +- docs/content/commands/rclone_cryptdecode.md | 26 +- docs/content/commands/rclone_dbhashsum.md | 26 +- docs/content/commands/rclone_dedupe.md | 27 +- docs/content/commands/rclone_delete.md | 26 +- .../commands/rclone_genautocomplete.md | 26 +- .../commands/rclone_genautocomplete_bash.md | 24 +- .../commands/rclone_genautocomplete_zsh.md | 24 +- docs/content/commands/rclone_gendocs.md | 26 +- docs/content/commands/rclone_hashsum.md | 187 ++ docs/content/commands/rclone_link.md | 180 ++ docs/content/commands/rclone_listremotes.md | 26 +- docs/content/commands/rclone_ls.md | 43 +- docs/content/commands/rclone_lsd.md | 60 +- docs/content/commands/rclone_lsf.md | 71 +- docs/content/commands/rclone_lsjson.md | 34 +- docs/content/commands/rclone_lsl.md | 43 +- docs/content/commands/rclone_md5sum.md | 26 +- docs/content/commands/rclone_mkdir.md | 26 +- docs/content/commands/rclone_mount.md | 56 +- docs/content/commands/rclone_move.md | 26 +- docs/content/commands/rclone_moveto.md | 26 +- docs/content/commands/rclone_ncdu.md | 26 +- docs/content/commands/rclone_obscure.md | 26 +- docs/content/commands/rclone_purge.md | 26 +- docs/content/commands/rclone_rc.md | 26 +- docs/content/commands/rclone_rcat.md | 26 +- docs/content/commands/rclone_rmdir.md | 26 +- docs/content/commands/rclone_rmdirs.md | 26 +- docs/content/commands/rclone_serve.md | 26 +- docs/content/commands/rclone_serve_http.md | 29 +- docs/content/commands/rclone_serve_restic.md | 30 +- docs/content/commands/rclone_serve_webdav.md | 29 +- docs/content/commands/rclone_sha1sum.md | 26 +- docs/content/commands/rclone_size.md | 27 +- docs/content/commands/rclone_sync.md | 26 +- docs/content/commands/rclone_touch.md | 26 +- docs/content/commands/rclone_tree.md | 26 +- docs/content/commands/rclone_version.md | 26 +- docs/layouts/partials/version.html | 2 +- fs/version.go | 2 +- rclone.1 | 1642 +++++++++++++---- 65 files changed, 5731 insertions(+), 1875 deletions(-) create mode 100644 docs/content/commands/rclone_about.md create mode 100644 docs/content/commands/rclone_hashsum.md create mode 100644 docs/content/commands/rclone_link.md diff --git a/MANUAL.html b/MANUAL.html index 19d38aa09..826801f3c 100644 --- a/MANUAL.html +++ b/MANUAL.html @@ -12,7 +12,7 @@

Rclone

Logo

@@ -33,6 +33,7 @@
  • Hubic
  • IBM COS S3
  • Memset Memstore
  • +
  • Mega
  • Microsoft Azure Blob Storage
  • Microsoft OneDrive
  • Minio
  • @@ -40,7 +41,7 @@
  • OVH
  • Openstack Swift
  • Oracle Cloud Storage
  • -
  • Ownloud
  • +
  • ownCloud
  • pCloud
  • put.io
  • QingStor
  • @@ -151,6 +152,7 @@ sudo mv rclone /usr/local/bin/
  • Google Drive
  • HTTP
  • Hubic
  • +
  • Mega
  • Microsoft Azure Blob Storage
  • Microsoft OneDrive
  • Openstack Swift / Rackspace Cloudfiles / Memset Memstore
  • @@ -272,6 +274,12 @@ rclone --dry-run --min-size 100M delete remote:path

    List the objects in the path with size and path.

    Synopsis

    Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default.

    +

    Eg

    +
    $ rclone ls swift:bucket
    +    60295 bevajer5jef
    +    90613 canole
    +    94467 diwogej7
    +    37600 fubuwic

Any of the filtering options can be applied to this command.

    There are several related list commands

    ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes).

    rclone ls remote:path [flags]

    Options

      -h, --help   help for ls

    rclone lsd

    List all directories/containers/buckets in the path.

    Synopsis

    -

    Lists the directories in the source path to standard output. Recurses by default.

    +

    Lists the directories in the source path to standard output. Does not recurse by default. Use the -R flag to recurse.

    +

    This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name of the directory, Eg

    +
    $ rclone lsd swift:
    +      494000 2018-04-26 08:43:20     10000 10000files
    +          65 2018-04-26 08:43:20         1 1File
    +

    Or

    +
    $ rclone lsd drive:test
    +          -1 2016-10-17 17:41:53        -1 1000files
    +          -1 2017-01-03 14:40:54        -1 2500files
    +          -1 2017-07-08 14:39:28        -1 4000files
    +

    If you just want the directory names use "rclone lsf --dirs-only".

Any of the filtering options can be applied to this command.

    There are several related list commands

    ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes).

    rclone lsd remote:path [flags]

    Options

    -
      -h, --help   help for lsd
    +
      -h, --help        help for lsd
    +  -R, --recursive   Recurse into the listing.

    rclone lsl

    List the objects in path with modification time, size and path.

    Synopsis

    Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default.

    +

    Eg

    +
    $ rclone lsl swift:bucket
    +    60295 2016-06-25 18:55:41.062626927 bevajer5jef
    +    90613 2016-06-25 18:55:43.302607074 canole
    +    94467 2016-06-25 18:55:43.046609333 diwogej7
    +    37600 2016-06-25 18:55:40.814629136 fubuwic

Any of the filtering options can be applied to this command.

    There are several related list commands

    ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes).

    rclone lsl remote:path [flags]

    Options

      -h, --help   help for lsl
    @@ -345,7 +373,8 @@ rclone --dry-run --min-size 100M delete remote:path

    Prints the total size and number of objects in remote:path.

    rclone size remote:path [flags]

    Options

    -
      -h, --help   help for size
    +
      -h, --help   help for size
    +      --json   format output as JSON
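A minimal illustration of the new flag (the remote and the numbers here are made up, but count and bytes are the fields rclone size reports):

$ rclone size --json remote:path
{"count":4,"bytes":282957}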

    rclone version

    Show the version number.

    Synopsis

    @@ -415,6 +444,7 @@ two-3.txt: renamed from: two.txt
  • --dedupe-mode first - removes identical files then keeps the first one.
  • --dedupe-mode newest - removes identical files then keeps the newest one.
  • --dedupe-mode oldest - removes identical files then keeps the oldest one.
  • +
  • --dedupe-mode largest - removes identical files then keeps the largest one.
  • --dedupe-mode rename - removes identical files then renames the rest to be different.
  • For example to rename all the identically named photos in your Google Photos directory, do

    @@ -425,23 +455,62 @@ two-3.txt: renamed from: two.txt

    Options

          --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
       -h, --help                 help for dedupe
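As a quick sketch of the new mode (the remote path is illustrative):

rclone dedupe --dedupe-mode largest remote:dupes

This first removes byte-identical duplicates, then keeps only the largest of any remaining identically named files.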
    +

    rclone about

    +

    Get quota information from the remote.

    +

    Synopsis

    +

    Get quota information from the remote, like bytes used/free/quota and bytes used in the trash. Not supported by all remotes.

    +

    This will print to stdout something like this:

    +
    Total:   17G
    +Used:    7.444G
    +Free:    1.315G
    +Trashed: 100.000M
    +Other:   8.241G
    +

    Where the fields are:

    + +

    Note that not all the backends provide all the fields - they will be missing if they are not known for that backend. Where it is known that the value is unlimited the value will also be omitted.

    +

    Use the --full flag to see the numbers written out in full, eg

    +
    Total:   18253611008
    +Used:    7993453766
    +Free:    1411001220
    +Trashed: 104857602
    +Other:   8849156022
    +

    Use the --json flag for a computer readable output, eg

    +
    {
    +    "total": 18253611008,
    +    "used": 7993453766,
    +    "trashed": 104857602,
    +    "other": 8849156022,
    +    "free": 1411001220
    +}
    +
    rclone about remote: [flags]
    +

    Options

    +
          --full   Full numbers instead of SI units
    +  -h, --help   help for about
    +      --json   Format output as JSON

    rclone authorize

    Remote authorization.

    -

    Synopsis

    +

    Synopsis

    Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.

    rclone authorize [flags]
    -

    Options

    +

    Options

      -h, --help   help for authorize

    rclone cachestats

    Print cache stats for a remote

    -

    Synopsis

    +

    Synopsis

    Print cache stats for a remote in JSON format

    rclone cachestats source: [flags]
    -

    Options

    +

    Options

      -h, --help   help for cachestats

    rclone cat

    Concatenates any files and sends them to stdout.

    -

    Synopsis

    +

    Synopsis

    rclone cat sends any files to standard output.

    You can use it like this to output a single file

    rclone cat remote:path/to/file
    @@ -451,7 +520,7 @@ two-3.txt: renamed from: two.txt
    rclone --include "*.txt" cat remote:path/to/dir

    Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.
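For example, printing just the last byte of a file the long way round (path illustrative):

rclone cat --offset -1 --count 1 remote:path/to/file

which, as noted above, is the same as --tail 1.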

    rclone cat remote:path [flags]
    -

    Options

    +

    Options

          --count int    Only print N characters. (default -1)
           --discard      Discard the output instead of printing.
           --head int     Only print the first N characters.
    @@ -460,76 +529,76 @@ two-3.txt: renamed from: two.txt
    --tail int Only print the last N characters.

    rclone config create

    Create a new remote with name, type and options.

    -

    Synopsis

    +

    Synopsis

Create a new remote of <name> with <type> and options. The options should be passed in pairs of <key> <value>.

    For example to make a swift remote of name myremote using auto config you would do:

    rclone config create myremote swift env_auth true
    rclone config create <name> <type> [<key> <value>]* [flags]
    -

    Options

    +

    Options

      -h, --help   help for create

    rclone config delete

Delete an existing remote <name>.

    -

    Synopsis

    +

    Synopsis

Delete an existing remote <name>.

    rclone config delete <name> [flags]
    -

    Options

    +

    Options

      -h, --help   help for delete

    rclone config dump

    Dump the config file as JSON.

    -

    Synopsis

    +

    Synopsis

    Dump the config file as JSON.

    rclone config dump [flags]
    -

    Options

    +

    Options

      -h, --help   help for dump

    rclone config edit

    Enter an interactive configuration session.

    -

    Synopsis

    +

    Synopsis

Enter an interactive configuration session where you can set up new remotes and manage existing ones. You may also set or remove a password to protect your configuration.

    rclone config edit [flags]
    -

    Options

    +

    Options

      -h, --help   help for edit

    rclone config file

    Show path of configuration file in use.

    -

    Synopsis

    +

    Synopsis

    Show path of configuration file in use.

    rclone config file [flags]
    -

    Options

    +

    Options

      -h, --help   help for file

    rclone config password

    Update password in an existing remote.

    -

    Synopsis

    +

    Synopsis

Update an existing remote's password. The password should be passed in pairs of <key> <value>.

    For example to set password of a remote of name myremote you would do:

    rclone config password myremote fieldname mypassword
    rclone config password <name> [<key> <value>]+ [flags]
    -

    Options

    +

    Options

      -h, --help   help for password

    rclone config providers

    List in JSON format all the providers and options.

    -

    Synopsis

    +

    Synopsis

    List in JSON format all the providers and options.

    rclone config providers [flags]
    -

    Options

    +

    Options

      -h, --help   help for providers

    rclone config show

    Print (decrypted) config file, or the config for a single remote.

    -

    Synopsis

    +

    Synopsis

    Print (decrypted) config file, or the config for a single remote.

    rclone config show [<remote>] [flags]
    -

    Options

    +

    Options

      -h, --help   help for show

    rclone config update

    Update options in an existing remote.

    -

    Synopsis

    +

    Synopsis

Update an existing remote's options. The options should be passed in pairs of <key> <value>.

    For example to update the env_auth field of a remote of name myremote you would do:

    rclone config update myremote swift env_auth true
    rclone config update <name> [<key> <value>]+ [flags]
    -

    Options

    +

    Options

      -h, --help   help for update

    rclone copyto

    Copy files from source to dest, skipping already copied

    -

    Synopsis

    +

    Synopsis

    If source:path is a file or directory then it copies it to a file or directory named dest:path.

    This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command.

    So

    @@ -543,11 +612,11 @@ if src is directory see copy command for full details

    This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.

    rclone copyto source:path dest:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for copyto

    rclone cryptcheck

    Cryptcheck checks the integrity of a crypted remote.

    -

    Synopsis

    +

    Synopsis

    rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.

    For it to work the underlying remote of the cryptedremote must support some kind of checksum.

    It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.

    @@ -557,11 +626,11 @@ if src is directory
    rclone cryptcheck remote:path encryptedremote:path

    After it has run it will log the status of the encryptedremote:.

    rclone cryptcheck remote:path cryptedremote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for cryptcheck

    rclone cryptdecode

    Cryptdecode returns unencrypted file names.

    -

    Synopsis

    +

    Synopsis

    rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.

    If you supply the --reverse flag, it will return encrypted file names.

    use it like this

    @@ -569,25 +638,25 @@ if src is directory rclone cryptdecode --reverse encryptedremote: filename1 filename2
    rclone cryptdecode encryptedremote: encryptedfilename [flags]
    -

    Options

    +

    Options

      -h, --help      help for cryptdecode
           --reverse   Reverse cryptdecode, encrypts filenames

    rclone dbhashsum

    Produces a Dropbox hash file for all the objects in the path.

    -

    Synopsis

    +

    Synopsis

    Produces a Dropbox hash file for all the objects in the path. The hashes are calculated according to Dropbox content hash rules. The output is in the same format as md5sum and sha1sum.

    rclone dbhashsum remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for dbhashsum

    rclone genautocomplete

    Output completion script for a given shell.

    -

    Synopsis

    +

    Synopsis

    Generates a shell completion script for rclone. Run with --help to list the supported shells.

    -

    Options

    +

    Options

      -h, --help   help for genautocomplete

    rclone genautocomplete bash

    Output bash completion script for rclone.

    -

    Synopsis

    +

    Synopsis

    Generates a bash shell autocompletion script for rclone.

    This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg

    sudo rclone genautocomplete bash
    @@ -595,11 +664,11 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
    . /etc/bash_completion

    If you supply a command line argument the script will be written there.

    rclone genautocomplete bash [output_file] [flags]
    -

    Options

    +

    Options

      -h, --help   help for bash

    rclone genautocomplete zsh

    Output zsh completion script for rclone.

    -

    Synopsis

    +

    Synopsis

    Generates a zsh autocompletion script for rclone.

    This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, eg

    sudo rclone genautocomplete zsh
    @@ -607,39 +676,93 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
    autoload -U compinit && compinit

    If you supply a command line argument the script will be written there.

    rclone genautocomplete zsh [output_file] [flags]
    -

    Options

    +

    Options

      -h, --help   help for zsh

    rclone gendocs

    Output markdown docs for rclone to the directory supplied.

    -

    Synopsis

    +

    Synopsis

    This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.

    rclone gendocs output_directory [flags]
    -

    Options

    +

    Options

      -h, --help   help for gendocs
    +

    rclone hashsum

    +

Produces a hashsum file for all the objects in the path.

    +

    Synopsis

    +

    Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.

    +

    Run without a hash to see the list of supported hashes, eg

    +
    $ rclone hashsum
    +Supported hashes are:
    +  * MD5
    +  * SHA-1
    +  * DropboxHash
    +  * QuickXorHash
    +

    Then

    +
    $ rclone hashsum MD5 remote:path
    +
    rclone hashsum <hash> remote:path [flags]
    +

    Options

    +
      -h, --help   help for hashsum
+

rclone link

    Generate public link to file/folder.

    +

    Synopsis

    +

    rclone link will create or retrieve a public link to the given file or folder.

    +
    rclone link remote:path/to/file
    +rclone link remote:path/to/folder/
    +

    If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always be created with the least constraints – e.g. no expiry, no password protection, accessible without account.

    +
    rclone link remote:path [flags]
    +

    Options

    +
      -h, --help   help for link

    rclone listremotes

    List all the remotes in the config file.

    -

    Synopsis

    +

    Synopsis

    rclone listremotes lists all the available remotes from the config file.

When used with the -l flag it lists the types too.

    rclone listremotes [flags]
    -

    Options

    +

    Options

      -h, --help   help for listremotes
       -l, --long   Show the type as well as names.

    rclone lsf

    List directories and objects in remote:path formatted for parsing

    -

    Synopsis

    +

    Synopsis

    List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.

    +

    Eg

    +
    $ rclone lsf swift:bucket
    +bevajer5jef
    +canole
    +diwogej7
    +ferejej3gux/
    +fubuwic

    Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:

    p - path
     s - size
     t - modification time
     h - hash

    So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last.

    +

    Eg

    +
    $ rclone lsf  --format "tsp" swift:bucket
    +2016-06-25 18:55:41;60295;bevajer5jef
    +2016-06-25 18:55:43;90613;canole
    +2016-06-25 18:55:43;94467;diwogej7
    +2018-04-26 08:50:45;0;ferejej3gux/
    +2016-06-25 18:55:40;37600;fubuwic

    If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.

    For example to emulate the md5sum command you can use

    rclone lsf -R --hash MD5 --format hp --separator "  " --files-only .
    +

    Eg

    +
    $ rclone lsf -R --hash MD5 --format hp --separator "  " --files-only swift:bucket 
    +7908e352297f0f530b84a756f188baa3  bevajer5jef
    +cd65ac234e6fea5925974a51cdd865cc  canole
    +03b5341b4f234b9d984d03ad076bae91  diwogej7
    +8fd37c3810dd660778137ac3a66cc06d  fubuwic
    +99713e14a4c4ff553acaf1930fad985b  gixacuh7ku

    (Though "rclone md5sum ." is an easier way of typing this.)

    By default the separator is ";" this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy.

    +

    Eg

    +
    $ rclone lsf  --separator "," --format "tshp" swift:bucket
    +2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
    +2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
    +2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
    +2018-04-26 08:52:53,0,,ferejej3gux/
    +2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic

Any of the filtering options can be applied to this command.

    There are several related list commands

    ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes).

    rclone lsf remote:path [flags]
    -

    Options

    +

    Options

      -d, --dir-slash          Append a slash to directory names. (default true)
           --dirs-only          Only list directories.
           --files-only         Only list files.
    @@ -664,7 +788,7 @@ h - hash
    -s, --separator string Separator for the items in the format. (default ";")

    rclone lsjson

    List directories and objects in the path in JSON format.

    -

    Synopsis

    +

    Synopsis

    List directories and objects in the path in JSON format.

    The output is an array of Items, where each Item looks like this

    { "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "IsDir" : false, "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt", "Size" : 6 }

    @@ -684,10 +808,11 @@ h - hash
  • lsjson to list objects and directories in JSON format
  • ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    +

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket-based remotes).

    rclone lsjson remote:path [flags]
    -

    Options

    +

    Options

      -M, --encrypted    Show the encrypted names.
           --hash         Include hashes in the output (may take longer).
       -h, --help         help for lsjson
    @@ -695,7 +820,7 @@ h - hash
    -R, --recursive Recurse into the listing.

    rclone mount

    Mount the remote as a mountpoint. EXPERIMENTAL

    -

    Synopsis

    +

    Synopsis

    rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

    This is EXPERIMENTAL - use with care.

    First set up your remote using rclone config. Check it works with rclone ls etc.

    @@ -723,8 +848,11 @@ umount /path/to/local/mount

File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the EXPERIMENTAL file caching for solutions to make mount more reliable.

    Attribute caching

    You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries.

    -

    The default is 0s - no caching - which is recommended for filesystems which can change outside the control of the kernel.

    -

    If you set it higher ('1s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there may be strange effects when files change on the remote.

    +

    The default is "1s" which caches files just long enough to avoid too many callbacks to rclone from the kernel.

    +

    In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as rclone using too much memory, rclone not serving files to samba and excessive time listing directories.

    +

    The kernel can cache the info about a file for the time given by "--attr-timeout". You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With "--attr-timeout 1s" this is very unlikely but not impossible. The higher you set "--attr-timeout" the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above.

    +

    If you set it higher ('10s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above.

    +

    If files don't change on the remote outside of the control of rclone then there is no chance of corruption.

    This is the same as setting the attr_timeout option in mount.fuse.

    Filters

    Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.

    @@ -782,11 +910,11 @@ umount /path/to/local/mount

    This mode should support all normal file system operations.

    If an upload or download fails it will be retried up to --low-level-retries times.

    rclone mount remote:path /path/to/mountpoint [flags]
    -

    Options

    +

    Options

          --allow-non-empty                    Allow mounting over a non-empty directory.
           --allow-other                        Allow access to other users.
           --allow-root                         Allow access to root user.
    -      --attr-timeout duration              Time for which file/directory attributes are cached.
    +      --attr-timeout duration              Time for which file/directory attributes are cached. (default 1s)
           --daemon                             Run mount as a daemon (background mode).
           --debug-fuse                         Debug the FUSE internals - needs -v.
           --default-permissions                Makes kernel enforce access control based on the file mode.
    @@ -809,7 +937,7 @@ umount /path/to/local/mount
    --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.

    rclone moveto

    Move file or directory from source to dest.

    -

    Synopsis

    +

    Synopsis

    If source:path is a file or directory then it moves it to a file or directory named dest:path.

This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.

    So

    @@ -824,11 +952,11 @@ if src is directory

    This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.

    Important: Since this can cause data loss, test first with the --dry-run flag.

    rclone moveto source:path dest:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for moveto

    rclone ncdu

    Explore a remote with a text based user interface.

    -

    Synopsis

    +

    Synopsis

    This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".

    To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.

@@ -839,34 +967,35 @@ if src is directory
 c toggle counts
 g toggle graph
 n,s,C sort by name,size,count
+^L refresh screen
 ? to toggle help on and off
 q/ESC/c-C to quit

This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment, most importantly deleting files, but is useful as it stands.

    rclone ncdu remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for ncdu

    rclone obscure

    Obscure password for use in the rclone.conf

    -

    Synopsis

    +

    Synopsis

    Obscure password for use in the rclone.conf

    rclone obscure password [flags]
    -

    Options

    +

    Options

      -h, --help   help for obscure

    rclone rc

    Run a command against a running rclone.

    -

    Synopsis

    +

    Synopsis

This runs a command against a running rclone. By default it will use the address specified by the --rc-addr flag.

    Arguments should be passed in as parameter=value.

    The result will be returned as a JSON object by default.

    Use "rclone rc list" to see a list of all possible commands.

    rclone rc commands parameter [flags]
    -

    Options

    +

    Options

      -h, --help         help for rc
           --no-output    If set don't output the JSON result.
           --url string   URL to connect to rclone remote control. (default "http://localhost:5572/")

    rclone rcat

    Copies standard input to file on remote.

    -

    Synopsis

    +

    Synopsis

    rclone rcat reads from standard input (stdin) and copies it to a single remote file.

    echo "hello world" | rclone rcat remote:path/to/file
     ffmpeg - | rclone rcat --checksum remote:path/to/file
    @@ -874,37 +1003,37 @@ ffmpeg - | rclone rcat --checksum remote:path/to/file

rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote, please see the relevant backend documentation. Generally speaking, setting this cutoff too high will decrease your performance.

Note also that the upload cannot be retried, because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching it locally and then using rclone move to send it to the destination.

    rclone rcat remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for rcat

    rclone rmdirs

    Remove empty directories under the path.

    -

    Synopsis

    +

    Synopsis

This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in it.

    If you supply the --leave-root flag, it will not remove the root directory.

    This is useful for tidying up remotes that rclone has left a lot of empty directories in.

    rclone rmdirs remote:path [flags]
    -

    Options

    +

    Options

      -h, --help         help for rmdirs
           --leave-root   Do not remove root directory if empty

    rclone serve

    Serve a remote over a protocol.

    -

    Synopsis

    +

    Synopsis

    rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, eg

    rclone serve http remote:

    Each subcommand has its own options which you can see in their help.

    rclone serve <protocol> [opts] <remote> [flags]
    -

    Options

    +

    Options

      -h, --help   help for serve

    rclone serve http

    Serve the remote over HTTP.

    -

    Synopsis

    +

    Synopsis

    rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.

    You can use the filter flags (eg --include, --exclude) to control what is served.

    The server will log errors. Use -v to see access logs.

    --bwlimit will be respected for file transfers. Use --stats to control the stats printing.

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication if advised - see the next section for info.

    +

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
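Putting the server options together, a sketch of serving a remote on all interfaces (remote name illustrative):

rclone serve http --addr :8080 remote:path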

    Authentication

    @@ -972,7 +1101,7 @@ htpasswd -B htpasswd anotherUser

    This mode should support all normal file system operations.

    If an upload or download fails it will be retried up to --low-level-retries times.

    rclone serve http remote:path [flags]
    -

    Options

    +

    Options

          --addr string                        IPaddress:Port or :Port to bind server to. (default "localhost:8080")
           --cert string                        SSL PEM key (concatenation of certificate and CA certificate)
           --client-ca string                   Client certificate authority to verify clients with
    @@ -999,7 +1128,7 @@ htpasswd -B htpasswd anotherUser
    --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)

    rclone serve restic

    Serve the remote for restic's REST API.

    -

    Synopsis

    +

    Synopsis

    rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.

    Restic is a command line program for doing backups.

    The server will log errors. Use -v to see access logs.

    @@ -1038,8 +1167,8 @@ snapshot 45c8fdd8 saved $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/ # backup user2 stuff

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication if advised - see the next section for info.

    +

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    Authentication

    @@ -1056,8 +1185,9 @@ htpasswd -B htpasswd anotherUser

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    --cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    rclone serve restic remote:path [flags]
    -

    Options

    +

    Options

          --addr string                     IPaddress:Port or :Port to bind server to. (default "localhost:8080")
    +      --append-only                     disallow deletion of repository data
           --cert string                     SSL PEM key (concatenation of certificate and CA certificate)
           --client-ca string                Client certificate authority to verify clients with
       -h, --help                            help for restic
    @@ -1072,12 +1202,12 @@ htpasswd -B htpasswd anotherUser
    --user string User name for authentication.

    rclone serve webdav

    Serve remote:path over webdav.

    -

    Synopsis

    +

    Synopsis

    rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client or you can make a remote of type webdav to read and write it.

    NB at the moment each directory listing reads the start of each file which is undesirable: see https://github.com/golang/go/issues/22577

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication if advised - see the next section for info.

    +

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    Authentication

    @@ -1145,7 +1275,7 @@ htpasswd -B htpasswd anotherUser

    This mode should support all normal file system operations.

    If an upload or download fails it will be retried up to --low-level-retries times.

    rclone serve webdav remote:path [flags]
    -

    Options

    +

    Options

          --addr string                        IPaddress:Port or :Port to bind server to. (default "localhost:8080")
           --cert string                        SSL PEM key (concatenation of certificate and CA certificate)
           --client-ca string                   Client certificate authority to verify clients with
    @@ -1172,16 +1302,16 @@ htpasswd -B htpasswd anotherUser
    --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)

    rclone touch

    Create new file or change file modification time.

    -

    Synopsis

    +

    Synopsis

    Create new file or change file modification time.

    rclone touch remote:path [flags]
    -

    Options

    +

    Options

      -h, --help               help for touch
       -C, --no-create          Do not create the file if it does not exist.
       -t, --timestamp string   Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05)
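For example, backdating a file to a specific time (path and timestamp illustrative):

rclone touch remote:path/file.txt --timestamp 2006-01-02T15:04:05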

    rclone tree

    List the contents of the remote in a tree like fashion.

    -

    Synopsis

    +

    Synopsis

    rclone tree lists the contents of a remote in a similar way to the unix tree command.

    For example

    $ rclone tree remote:path
    @@ -1197,7 +1327,7 @@ htpasswd -B htpasswd anotherUser

    You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.

The rclone tree command has many options for controlling the listing which are compatible with the unix tree command. Note that not all of them have short options as they conflict with rclone's short options.

    rclone tree remote:path [flags]
    -

    Options

    +

    Options

      -a, --all             All files are listed (list . files too).
       -C, --color           Turn colorization on always.
       -d, --dirs-only       List directories only.
    @@ -1261,10 +1391,10 @@ htpasswd -B htpasswd anotherUser

    This can be used when scripting to make aged backups efficiently, eg

    rclone sync remote:current-backup remote:previous-backup
     rclone sync /path/to/files remote:current-backup
    -

    Options

    +

    Options

    Rclone has a number of options to control its behaviour.

    Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

    -

    Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes and G for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.

    +

Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for PBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30, 2**40, 2**50 respectively.

    --backup-dir=DIR

    When using sync, copy or move any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.

    If --suffix is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.
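For instance, a sync that parks anything it would overwrite or delete in a separate directory (paths illustrative):

rclone sync /path/to/files remote:current --backup-dir remote:old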

    @@ -1343,6 +1473,7 @@ rclone sync /path/to/files remote:current-backup

    During rmdirs it will not remove root directory, even if it's empty.

    --log-file=FILE

    Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.

    +

    Note that if you are using the logrotate program to manage rclone's logs, then you should use the copytruncate option as rclone doesn't have a signal to rotate logs.
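A typical invocation might look like this (paths illustrative):

rclone sync /path/to/files remote:backup -v --log-file /var/log/rclone.log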

    --log-level LEVEL

    This sets the log level for rclone. The default log level is NOTICE.

    DEBUG is equivalent to -vv. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.

    @@ -1452,6 +1583,9 @@ rclone sync /path/to/files remote:current-backup

    If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different.

    On remotes which don't support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.

    This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a --size-only check and faster than using --checksum.

    +

    --use-server-modtime

    +

Some object-store backends (e.g., Swift, S3) do not preserve file modification times (modtime). On these backends, rclone stores the original modtime as additional metadata on the object. By default it will make an API call to retrieve the metadata when the modtime is needed by an operation.

    +

    Use this flag to disable the extra API call and rely instead on the server's modified time. In cases such as a local to remote sync, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.
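A sketch of that local to remote case (paths illustrative; --update skips files that are newer on the destination):

rclone copy --update --use-server-modtime /path/to/source remote:path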

    -v, -vv, --verbose

    With -v rclone will tell you about each file that is transferred and a small number of significant events.

    With -vv rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.

    @@ -1519,6 +1653,10 @@ export RCLONE_CONFIG_PASS

    Dump HTTP headers - will contain sensitive info such as Authorization: headers - use --dump headers to dump without Authorization: headers. Can be very verbose. Useful for debugging only.

    --dump filters

    Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.

    +

    --dump goroutines

    +

    This dumps a list of the running go-routines at the end of the command to standard output.

    +

    --dump openfiles

    +

    This dumps a list of the open files at the end of the command. It uses the lsof command to do that so you'll need that installed to use it.
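For instance, to dump both of the above after a listing (remote illustrative; --dump takes a comma-separated list):

rclone ls remote:path --dump goroutines,openfiles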

    --memprofile=FILE

    Write memory profile to file. This can be analysed with go tool pprof.

    --no-check-certificate=true/false

    @@ -1578,7 +1716,7 @@ export RCLONE_CONFIG_PASS

    Environment Variables

    Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

    -

    Options

    +

    Options

    Every option in rclone can have its default set by environment variable.

    To find the name of the environment variable, first, take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.

    For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.
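For instance (remotes illustrative):

RCLONE_STATS=5s rclone sync source:path dest:path --stats 10s

Here the command line --stats 10s takes precedence over the environment variable.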

    @@ -1925,29 +2063,44 @@ dir1/dir2/dir3/.ignore }

    Run rclone rc on its own to see the help for the installed remote control commands.

    Supported commands

    + +

    cache/expire: Purge a remote from cache

    +

Purge a remote from the cache backend. Supports either a directory or a file. Params:

  • remote = path to remote (required)
  • withData = true/false to delete cached data (chunks) as well (optional)

    +

    Eg

    +
    rclone rc cache/expire remote=path/to/sub/folder/
    +rclone rc cache/expire remote=/ withData=true
    +

    cache/stats: Get cache stats

    +

    Show statistics for the cache remote.

    core/bwlimit: Set the bandwidth limit.

    This sets the bandwidth limit to that passed in.

    Eg

    -
    rclone core/bwlimit rate=1M
    -rclone core/bwlimit rate=off
    -

    cache/expire: Purge a remote from cache

    -

    Purge a remote from the cache backend. Supports either a directory or a file. Params:

    +
    rclone rc core/bwlimit rate=1M
    +rclone rc core/bwlimit rate=off
    +

    The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.

    +

    core/memstats: Returns the memory statistics

    +

This returns the memory statistics of the running program. What the values mean is explained in the go docs: https://golang.org/pkg/runtime/#MemStats

    +

    The most interesting values for most people are:

    +

    core/pid: Return PID of current process

    +

This returns the PID of the current process. Useful for stopping the rclone process.

    +

    rc/error: This returns an error

    +

    This returns an error with the input as part of its error string. Useful for testing error handling.

    +

    rc/list: List all the registered remote control commands

    +

    This lists all the registered remote control commands as a JSON map in the commands response.

    +

    rc/noop: Echo the input to the output parameters

    +

    This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

    vfs/forget: Forget files or directories in the directory cache.

    This forgets the paths in the directory cache causing them to be re-read from the remote when needed.

    If no paths are passed in then it will forget all the paths in the directory cache.

    rclone rc vfs/forget

    Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any starting with dir will forget that dir, eg

    rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
    -

    rc/noop: Echo the input to the output parameters

    -

    This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

    -

    rc/error: This returns an error

    -

    This returns an error with the input as part of its error string. Useful for testing error handling.

    -

    rc/list: List all the registered remote control commands

    -

    This lists all the registered remote control commands as a JSON map in the commands response.

    +

    Accessing the remote control via HTTP

    Rclone implements a simple HTTP based protocol.

Each endpoint takes a JSON object and returns a JSON object or an error. The JSON objects are essentially a map of string names to values.
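A quick way to exercise the protocol from the shell is the rc/noop endpoint described above, which simply echoes its parameters back as JSON (assumes the default --rc-addr):

curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'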

@@ -2103,6 +2256,14 @@ $ echo $?
(Hashes table, continued. Columns: Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type. This release adds the Mega row and the OneDrive ‡‡ footnote.)
+Mega                         | -           | No     | No  | Yes | -
 Microsoft Azure Blob Storage | MD5         | Yes    | No  | No  | R/W
-Microsoft OneDrive           | SHA1        | Yes    | Yes | No  | R
+Microsoft OneDrive           | SHA1 ‡‡     | Yes    | Yes | No  | R
 Openstack Swift              | MD5         | Yes    | No  | No  | R/W
 pCloud                       | MD5, SHA1   | Yes    | No  | No  | W
 QingStor                     | MD5         | No     | No  | No  | R/W
 SFTP                         | MD5, SHA1 ‡ | Yes    | No  | No  | -
 WebDAV                       | -           | Yes †† | No  | No  | -
 Yandex Disk                  | MD5         | Yes    | No  | No  | R/W
 The local filesystem         | All         | Yes    | …
@@ -2182,6 +2343,7 @@ $ echo $?

    † Note that Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.

    ‡ SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH.

    †† WebDAV supports modtimes when used with Owncloud and Nextcloud only.

    +

    ‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft's own QuickXorHash.

    ModTime

The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum flag.

    All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.

@@ -2216,6 +2378,8 @@ $ echo $?
(Optional features table, continued. This release adds two columns, LinkSharing and About, plus a new Mega row: Purge Yes, Copy No, Move Yes, DirMove Yes, CleanUp No, ListR No, StreamUpload No, LinkSharing No #2178, About Yes. The values of the two new columns are:)
+Name                         | LinkSharing | About
+Amazon Drive                 | No #2178    | No
+Amazon S3                    | No #2178    | No
+Backblaze B2                 | No #2178    | No
+Box                          | No #2178    | No
+Dropbox                      | Yes         | Yes
+FTP                          | No #2178    | No
+Google Cloud Storage         | No #2178    | No
+Google Drive                 | Yes         | Yes
+HTTP                         | No #2178    | No
+Hubic                        | No #2178    | Yes
+Mega                         | No #2178    | Yes
+Microsoft Azure Blob Storage | No #2178    | No
+Microsoft OneDrive           | No #2178    | Yes
+Openstack Swift              | No #2178    | Yes
+pCloud                       | No #2178    | Yes
+QingStor                     | No #2178    | No
+SFTP                         | No #2178    | No
+WebDAV                       | No #2178    | No
+Yandex Disk                  | No #2178    | No
+The local filesystem         | No          | Yes
@@ -2430,6 +2644,11 @@ $ echo $?

The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list flag to work. See the rclone docs for more details.

StreamUpload

Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat.

LinkSharing

Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don't have an account on the particular cloud provider.

About

This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash.

If the server can't do About then rclone about will return an error.
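For example (the remote name is illustrative and the fields shown vary by backend):

    $ rclone about remote:
    Total:   17G
    Used:    7.444G
    Free:    1.315G
    Trashed: 100.000M
    Other:   8.241G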

    Alias

    The alias remote provides a new name for another remote.

    Paths may be as deep as required or a local path, eg remote:directory/subdirectory or /directory/subdirectory.
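As a sketch, an alias remote pointing at a subdirectory of another remote might look like this in the config file (the names are illustrative):

    [media]
    type = alias
    remote = remote:directory/subdirectory

After which rclone ls media: lists the contents of remote:directory/subdirectory.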


Amazon Drive has an internal limit on the size of files that can be uploaded to the service. This limit is not officially published, but all files larger than it will fail.

At the time of writing (Jan 2016) it is in the area of 50GB per file. This means that larger files are likely to fail.

Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as it would any other failure. To avoid this problem, use the --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.

Amazon S3 Storage Providers

The S3 backend can be used with a number of different providers, including AWS, Ceph, DigitalOcean Spaces, Dreamhost, IBM COS, Minio and Wasabi.

Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

Once you have made a remote (see the provider specific sections below) you can use it like this:

See all buckets

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    rclone sync /home/local/directory remote:bucket

    AWS S3

Here is an example of making an S3 configuration. First run

    rclone config

    This will guide you through an interactive setup process.

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Alias for a existing remote
   \ "alias"
 2 / Amazon Drive
   \ "amazon cloud drive"
 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
   \ "s3"
 4 / Backblaze B2
   \ "b2"
[snip]
23 / http Connection
   \ "http"
Storage> s3
Choose your S3 provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Ceph Object Storage
   \ "Ceph"
 3 / Digital Ocean Spaces
   \ "DigitalOcean"
 4 / Dreamhost DreamObjects
   \ "Dreamhost"
 5 / IBM COS S3
   \ "IBMCOS"
 6 / Minio Object Storage
   \ "Minio"
 7 / Wasabi Object Storage
   \ "Wasabi"
 8 / Any other S3 compatible provider
   \ "Other"
provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YYY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
[snip]
   / South America (Sao Paulo) Region
14 | Needs location constraint sa-east-1.
   \ "sa-east-1"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
[snip]
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
storage_class> 1
Remote config
--------------------
[remote]
type = s3
provider = AWS
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
endpoint = 
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>


--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

--update and --use-server-modtime

As noted below, the modified time is stored as metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
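For example, a sketch of a cheap incremental sync which uploads anything whose local modtime is newer than the S3 upload time, avoiding the per-object metadata call (paths illustrative):

    rclone sync --update --use-server-modtime /home/local/directory remote:bucket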

    Modified time

    The modified time is stored as metadata on the object as X-Amz-Meta-Mtime as floating point since the epoch accurate to 1 ns.

    Multipart uploads

rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB.

    Buckets and Regions

    With Amazon S3 you can list buckets (rclone lsd) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region.

Authentication

There are a number of ways to supply rclone with a set of AWS credentials, with and without using the environment.

The different authentication methods are tried in this order:

 - Directly in the rclone configuration file (env_auth = false): access_key_id and secret_access_key are required.
 - Runtime configuration (env_auth = true):
    - Exported environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN.
    - A shared credentials file, as set by the AWS_SHARED_CREDENTIALS_FILE and AWS_PROFILE environment variables.
    - Running rclone on an EC2 instance or in an ECS task with an IAM role.

If none of these options actually end up providing rclone with AWS credentials then S3 interaction will be non-authenticated (see below).
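For example, a sketch of supplying credentials from the environment for a remote configured with env_auth = true (the variable names are the standard AWS ones):

    export AWS_ACCESS_KEY_ID=XXX
    export AWS_SECRET_ACCESS_KEY=YYY
    rclone lsd remote: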

    S3 Permissions


--s3-chunk-size=SIZE

Any files larger than this will be uploaded in chunks of this size. The default is 5MB. The minimum is 5MB.

Note that 2 chunks of this size are buffered in memory per transfer.

If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.
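For example, a sketch of copying a large file with bigger chunks to speed up a fast link (the value and paths are illustrative):

    rclone copy --s3-chunk-size 64M /data/bigfile remote:bucket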

    Anonymous access to public buckets

If you want to use rclone to access a public bucket, configure with a blank access_key_id and secret_access_key. Your config should end up looking like this:

[anons3]
type = s3
provider = AWS
env_auth = false
access_key_id = 
secret_access_key = 
region = us-east-1
endpoint = 
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 

    Then use it as normal with the name of the public bucket, eg

    rclone lsd anons3:1000genomes

    You will be able to list and copy data but not upload it.
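For instance, a sketch of copying something out of that public bucket (the object path is illustrative):

    rclone copy anons3:1000genomes/README /tmp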

Ceph

    To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

[ceph]
type = s3
provider = Ceph
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region =
endpoint = https://ceph.endpoint.example.com
location_constraint =
acl =
server_side_encryption =
storage_class =

    Note also that Ceph sometimes puts / in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the / escaped as \/. Make sure you only write / in the secret access key.

Eg the dump from Ceph looks something like this (irrelevant keys removed).

{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}

Dreamhost

Dreamhost DreamObjects is an object storage system based on CEPH.

    To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

[dreamobjects]
type = s3
provider = DreamHost
env_auth = false
access_key_id = your_access_key
secret_access_key = your_secret_key
DigitalOcean Spaces

Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean. When running rclone config, leave the region blank and set the endpoint to your Spaces region, eg:

    env_auth> 1
    access_key_id> YOUR_ACCESS_KEY
    secret_access_key> YOUR_SECRET_KEY
    region>
    endpoint> nyc3.digitaloceanspaces.com
    location_constraint>
    acl>
    storage_class>
    env_auth> 1 access_key_id> YOUR_ACCESS_KEY secret_access_key> YOUR_SECRET_KEY -region> +region> endpoint> nyc3.digitaloceanspaces.com -location_constraint> -acl> -storage_class> +location_constraint> +acl> +storage_class>

    The resulting configuration file should look like:

[spaces]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region =
endpoint = nyc3.digitaloceanspaces.com
location_constraint =
acl =
server_side_encryption =
storage_class =

    Once configured, you can create a new Space and begin copying files. For example:

    rclone mkdir spaces:my-new-space
     rclone copy /path/to/files spaces:my-new-space

    IBM COS (S3)

Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM's Cloud Object Storage System (formerly Cleversafe). For more information visit https://www.ibm.com/cloud/object-storage

    To configure access to IBM COS S3, follow the steps below:

    1. Run rclone config and select n for a new remote.

   No remotes found - make a new one
   n) New remote
   s) Set configuration password
   q) Quit config
   n/s/q> n

2. Enter the name for the configuration

   name> <YOUR NAME>

3. Select "s3" storage.

   Choose a number from below, or type in your own value
    1 / Alias for a existing remote
      \ "alias"
    2 / Amazon Drive
      \ "amazon cloud drive"
    3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
      \ "s3"
    4 / Backblaze B2
      \ "b2"
   [snip]
   23 / http Connection
      \ "http"
   Storage> 3

4. Select IBM COS as the S3 Storage Provider.

   Choose the S3 provider.
   Choose a number from below, or type in your own value
    1 / Choose this option to configure Storage to AWS S3
      \ "AWS"
    2 / Choose this option to configure Storage to Ceph Systems
      \ "Ceph"
    3 / Choose this option to configure Storage to Dreamhost
      \ "Dreamhost"
    4 / Choose this option to configure Storage to IBM COS S3
      \ "IBMCOS"
    5 / Choose this option to configure Storage to Minio
      \ "Minio"
   Provider> 4

5. Enter the Access Key and Secret.

   AWS Access Key ID - leave blank for anonymous access or runtime credentials.
   access_key_id> <>
   AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
   secret_access_key> <>

6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the options below. For On Premise IBM COS, enter an endpoint address.

   Endpoint for IBM COS S3 API.
   Specify if using an IBM COS On Premise.
   Choose a number from below, or type in your own value
    1 / US Cross Region Endpoint
      \ "s3-api.us-geo.objectstorage.softlayer.net"
    2 / US Cross Region Dallas Endpoint
      \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
    3 / US Cross Region Washington DC Endpoint
      \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
    4 / US Cross Region San Jose Endpoint
      \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
    5 / US Cross Region Private Endpoint
      \ "s3-api.us-geo.objectstorage.service.networklayer.com"
    6 / US Cross Region Dallas Private Endpoint
      \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
    7 / US Cross Region Washington DC Private Endpoint
      \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
    8 / US Cross Region San Jose Private Endpoint
      \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
    9 / US Region East Endpoint
      \ "s3.us-east.objectstorage.softlayer.net"
   10 / US Region East Private Endpoint
      \ "s3.us-east.objectstorage.service.networklayer.com"
   11 / US Region South Endpoint
   [snip]
   34 / Toronto Single Site Private Endpoint
      \ "s3.tor01.objectstorage.service.networklayer.com"
   endpoint> 1

7. Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter.

    1 / US Cross Region Standard
      \ "us-standard"
    2 / US Cross Region Vault
      \ "us-vault"
    3 / US Cross Region Cold
      \ "us-cold"
    4 / US Cross Region Flex
      \ "us-flex"
    5 / US East Region Standard
      \ "us-east-standard"
    6 / US East Region Vault
      \ "us-east-vault"
    7 / US East Region Cold
      \ "us-east-cold"
    8 / US East Region Flex
      \ "us-east-flex"
    9 / US South Region Standard
      \ "us-south-standard"
   10 / US South Region Vault
      \ "us-south-vault"
   [snip]
   32 / Toronto Flex
      \ "tor01-flex"
   location_constraint> 1

8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.

   Canned ACL used when creating buckets and/or storing objects in S3.
   For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
   Choose a number from below, or type in your own value
    1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
      \ "private"
    2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
      \ "public-read"
    3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
      \ "public-read-write"
    4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
      \ "authenticated-read"
   acl> 1

9. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this

   [xxx]
   type = s3
   provider = IBMCOS
   access_key_id = xxx
   secret_access_key = yyy
   endpoint = s3-api.us-geo.objectstorage.softlayer.net
   location_constraint = us-standard
   acl = private

10. Execute rclone commands

    1) Create a bucket.
       rclone mkdir IBM-COS-XREGION:newbucket
Minio

Minio is an object storage server built for cloud application developers and devops. Configure it as for plain S3, leaving these prompts blank:

    location_constraint>
    server_side_encryption>

      Which makes the config file look like this

[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Wasabi

Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Configuration is like the other S3 providers; the tail end of the interactive setup looks like this:

Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
storage_class>
Remote config
--------------------
[wasabi]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

This will leave the config file looking like this.

[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =

      Backblaze B2

      B2 is Backblaze's cloud storage system.

      Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

Cache

--cache-db-purge

Flag to clear all the cached data for this remote before starting.

      Default: not set

--cache-chunk-size=SIZE

The size of a chunk (partial file data). Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur.

Default: 5M

      --cache-total-chunk-size=SIZE

The total size that the chunks can take up on the local disk. If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value.

Dropbox

      Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.

Limitations

Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail.

If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.

      FTP

      FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package.

Google Cloud Storage

      Service Account support

      You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

      To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.

To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.
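A sketch of what the file-based variant looks like in the config file (the remote name, project number and path are illustrative):

    [gcs]
    type = google cloud storage
    project_number = 12345678
    service_account_file = /home/user/credentials.json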

      --fast-list

      This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

      Modified time

Google Drive

      Note also that rclone can't access any data under the "Backups" tab on the google drive web interface yet.

      Service Account support

      You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt during rclone config and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.

      Use case - Google Apps/G-suite account and individual Drive

      Let's say that you are the administrator of a Google Apps (old) or G-suite account. The goal is to store data on an individual's Drive account, who IS a member of the domain. We'll call the domain example.com, and the user foo@example.com.

There are a few steps we need to go through to accomplish this:


By default rclone will send all files to the trash when deleting files. If deleting them permanently is required then use the --drive-use-trash=false flag, or set the equivalent environment variable.

Emptying trash

If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.

Quota information

To view your current quota you can use the rclone about remote: command which will display your usage limit (quota), the usage in Google Drive, the size of all files in the Trash and the space used by other Google services such as Gmail. This command does not take any path arguments.

      Specific options

      Here are the command line options specific to this cloud storage system.

      --drive-auth-owner-only

Hubic

      Limitations

      This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.

      The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

Mega

Mega is a cloud storage and file hosting service known for its security feature where all files are encrypted locally before they are uploaded. This prevents anyone (including employees of Mega) from accessing the files without knowledge of the key used for encryption.

This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.

Paths are specified as remote:path

Paths may be as deep as required, eg remote:directory/subdirectory.

Here is an example of how to make a remote called remote. First run:

    rclone config

This will guide you through an interactive setup process:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Alias for a existing remote
   \ "alias"
[snip]
14 / Mega
   \ "mega"
[snip]
23 / http Connection
   \ "http"
Storage> mega
User name
user> you@example.com
Password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> y
Enter the password:
password:
Confirm the password:
password:
Remote config
--------------------
[remote]
type = mega
user = you@example.com
pass = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

Once configured you can then use rclone like this,

List directories in top level of your Mega

    rclone lsd remote:

List all the files in your Mega

    rclone ls remote:

To copy a local directory to a Mega directory called backup

    rclone copy /home/source remote:backup

Modified time and hashes

Mega does not support modification times or hashes yet.

Duplicated files

Mega can have two files with exactly the same name and path (unlike a normal file system).

Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

Use rclone dedupe to fix duplicated files.

Limitations

This backend uses the go-mega go library which is an open source go library implementing the Mega API. There doesn't appear to be any documentation for the mega protocol beyond the mega C++ SDK source code, so there are likely quite a few errors still remaining in this library.

Mega allows duplicate files which may confuse rclone.
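For example, a sketch of removing duplicates non-interactively, keeping the newest copy of each duplicated file (see the dedupe docs for the other modes):

    rclone dedupe --dedupe-mode newest remote:path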

      Microsoft Azure Blob Storage

      Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.

      Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called remote. First run:


      Cutoff for switching to chunked upload - must be <= 256MB. The default is 256MB.

      --azureblob-chunk-size=SIZE

      Upload chunk size. Default 4MB. Note that this is stored in memory and there may be up to --transfers chunks stored at once in memory. This can be at most 100MB.
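For example, a sketch of raising the chunk size for a fast link with plenty of memory (the value and paths are illustrative):

    rclone copy --azureblob-chunk-size 16M /data/bigfile remote:container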


      Limitations

      MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.

      Microsoft OneDrive

      Paths are specified as remote:path

b) Business
p) Personal
b/p>

After that rclone requires authentication of your account. The application will first authenticate your account, then query the OneDrive resource URL and do a second (silent) authentication for this resource URL.

Modified time and hashes

OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

OneDrive personal supports SHA1 type hashes. OneDrive for business and Sharepoint Server support QuickXorHash.

For all types of OneDrive you can use the --checksum flag.

      Deleting files

      Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.

      Specific options

      Here are the command line options specific to this cloud storage system.

      --onedrive-chunk-size=SIZE

      Above this size files will be chunked - must be multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.


      Limitations

      Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to ？ instead.

      The largest allowed file size is 10GiB (10,737,418,240 bytes).

Swift

    export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
    rclone lsd myremote:

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

--update and --use-server-modtime

As noted below, the modified time is stored as metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.

      Specific options

      Here are the command line options specific to this cloud storage system.

      --swift-chunk-size=SIZE


      Modified time

      The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.


      Limitations

      The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

      Troubleshooting

      Rclone gives Failed to create file system for "remote:": Bad Request

pCloud
      rclone ls remote:

To copy a local directory to a pCloud directory called backup

      rclone copy /home/source remote:backup

      Modified time and hashes

      pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a Modification time pCloud requires the object be re-uploaded.

      pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum flag.

      Deleting files

      Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup can be used to empty the trash.

      SFTP

      SFTP is the Secure (or SSH) File Transfer Protocol.


      SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.

      Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.


      Note that some SFTP servers will need the leading / - Synology is a good example of this.
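For example (host and paths illustrative), the first of these is relative to the user's home directory, the second is absolute:

    rclone ls remote:backup
    rclone ls remote:/volume1/backup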

      Here is an example of making an SFTP configuration. First run

      rclone config

      This will guide you through an interactive setup process.


      Modified times are stored on the server to 1 second precision.

      Modified times are used in syncing and are fully supported.

      Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your RClone backend configuration to disable this behaviour.
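A sketch of a config for such a server (the remote name, host and user are illustrative):

    [mysftp]
    type = sftp
    host = example.com
    user = sftpuser
    set_modtime = false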

Limitations

SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming.

Note that on some SFTP servers (eg Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck is a good idea.

The only ssh agent supported under Windows is PuTTY's pageant.

The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher setting in the configuration file to true. Further details on the insecurity of this cipher can be found in this paper (http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).
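A sketch of re-enabling the cipher for one legacy server (the remote name and host are illustrative):

    [legacysftp]
    type = sftp
    host = old-server.example.com
    user = sftpuser
    use_insecure_cipher = true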

      SFTP isn't supported under plan9 until this issue is fixed.

WebDAV

Choose a number from below, or type in your own value
 1 / Nextcloud
   \ "nextcloud"
 2 / Owncloud
   \ "owncloud"
 3 / Sharepoint
   \ "sharepoint"
 4 / Other site/service or software
   \ "other"
vendor> 1
[snip]
y/e/d> y

List all the files in your WebDAV
      rclone ls remote:

To copy a local directory to a WebDAV directory called backup

      rclone copy /home/source remote:backup

      Modified time and hashes

      Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times.

      Hashes are not supported.

      Owncloud

user = YourUserName
pass = encryptedpassword

      If you are using put.io with rclone mount then use the --read-only flag to signal to the OS that it can't write to the mount.

      For more help see the put.io webdav docs.

Sharepoint

Can be used with Sharepoint provided by OneDrive for Business or Office365 Education Accounts. This feature is only needed for a few of these Accounts, mostly Office365 Education ones. These accounts are sometimes not verified by the domain owner github#1975

This means that these accounts can't be added using the official API (other Accounts should work with the "onedrive" option). However, it is possible to access them using webdav.

To use a sharepoint remote with rclone, add it like this: First, you need to get your remote's URL:

- Go here to open your OneDrive or to sign in
- Now take a look at your address bar, the URL should look like this: https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_layouts/15/onedrive.aspx

You'll only need this URL up to the email address. After that, you'll most likely want to add "/Documents". That subdirectory contains the actual data stored on your OneDrive.

Add the remote to rclone like this: Configure the url as https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents and use your normal account email and password for user and pass. If you have 2FA enabled, you have to generate an app password. Set the vendor to sharepoint.

Your config file should look like this:

[sharepoint]
type = webdav
url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
vendor = sharepoint
user = YourEmailAddress
pass = encryptedpassword

      Yandex Disk

      Yandex Disk is a cloud storage solution created by Yandex.

      Yandex paths may be as deep as required, eg remote:directory/subdirectory.

Local Filesystem

nounc = true

    6 two/three
    6 b/two
    6 b/one

--local-no-check-updated

Don't check to see if the files change during upload.

Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts can't copy - source file is being updated if the file changes during upload.

However on some file systems this modification time check may fail (eg Glusterfs #2206) so this check can be disabled with this flag.

      --local-no-unicode-normalization

      This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead.

      --one-file-system, -x


      This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.

      Changelog

- v1.41 - 2018-04-28
  - New backends
    - Mega support added
    - Webdav now supports SharePoint cookie authentication (hensur)
  - New commands
    - link: create public link to files and folders (Stefan Breunig)
    - about: gets quota info from a remote (a-roussos, ncw)
    - hashsum: a generic tool for any hash to produce md5sum like output
  - New Features
    - lsd: Add -R flag and fix and update docs for all ls commands
    - ncdu: added a "refresh" key - CTRL-L (Keith Goldfarb)
    - serve restic: Add append-only mode (Steve Kriss)
    - serve restic: Disallow overwriting files in append-only mode (Alexander Neumann)
    - serve restic: Print actual listener address (Matt Holt)
    - size: Add --json flag (Matthew Holt)
    - sync: implement --ignore-errors (Mateusz Pabian)
    - dedupe: Add dedupe largest functionality (Richard Yang)
    - fs: Extend SizeSuffix to include TB and PB for rclone about
    - fs: add --dump goroutines and --dump openfiles for debugging
    - rc: implement core/memstats to print internal memory usage info
    - rc: new call rc/pid (Michael P. Dubner)
  - Compile
    - Drop support for go1.6
  - Release
    - Fix make tarball (Chih-Hsuan Yen)
  - Bug Fixes
    - filter: fix --min-age and --max-age together check
    - fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport
    - lsd,lsf: make sure all times we output are in local time
    - rc: fix setting bwlimit to unlimited
    - rc: take note of the --rc-addr flag too as per the docs
  - Mount
    - Use About to return the correct disk total/used/free (eg in df)
    - Set --attr-timeout default to 1s - fixes:
      - rclone using too much memory
      - rclone not serving files to samba
      - excessive time listing directories
    - Fix df -i (upstream fix)
  - VFS
    - Filter files . and .. from directory listing
    - Only make the VFS cache if --vfs-cache-mode > Off
  - Local
    - Add --local-no-check-updated to disable updated file checks
    - Retry remove on Windows sharing violation error
  - Cache
    - Flush the memory cache after close
    - Purge file data on notification
    - Always forget parent dir for notifications
    - Integrate with Plex websocket
    - Add rc cache/stats (seuffert)
    - Add info log on notification
  - Box
    - Fix failure reading large directories - parse file/directory size as float
  - Dropbox
    - Fix crypt+obfuscate on dropbox
    - Fix repeatedly uploading the same files
  - FTP
    - Work around strange response from box FTP server
    - More workarounds for FTP servers to fix mkParentDir error
    - Fix no error on listing non-existent directory
  - Google Cloud Storage
    - Add service_account_credentials (Matt Holt)
    - Detect bucket presence by listing it - minimises permissions needed
    - Ignore zero length directory markers
  - Google Drive
    - Add service_account_credentials (Matt Holt)
    - Fix directory move leaving a hardlinked directory behind
    - Return proper google errors when Opening files
    - When initialized with a filepath, optional features used incorrect root path (Stefan Breunig)
  - HTTP
    - Fix sync for servers which don't return Content-Length in HEAD
  - Onedrive
    - Add QuickXorHash support for OneDrive for business
    - Fix socket leak in multipart session upload
  - S3
    - Look in S3 named profile files for credentials
    - Add --s3-disable-checksum to disable checksum uploading (Chris Redekop)
    - Hierarchical configuration support (Giri Badanahatti)
    - Add in config for all the supported S3 providers
    - Add One Zone Infrequent Access storage class (Craig Rachel)
    - Add --use-server-modtime support (Peter Baumgartner)
    - Add --s3-chunk-size option to control multipart uploads
    - Ignore zero length directory markers
  - SFTP
    - Update docs to match code, fix typos and clarify disable_hashcheck prompt (Michael G. Noll)
    - Update docs with Synology quirks
    - Fail soft with a debug on hash failure
  - Swift
    - Add --use-server-modtime support (Peter Baumgartner)
  - Webdav
    - Support SharePoint cookie authentication (hensur)
    - Strip leading and trailing / off root
- v1.40 - 2018-03-19

        Contact the rclone project

        Forum

        diff --git a/MANUAL.md b/MANUAL.md index b7da0e536..0fd83dfbc 100644 --- a/MANUAL.md +++ b/MANUAL.md @@ -1,6 +1,6 @@ % rclone(1) User Manual % Nick Craig-Wood -% Mar 19, 2018 +% Apr 28, 2018 Rclone ====== @@ -24,6 +24,7 @@ Rclone is a command line program to sync files and directories to and from: * Hubic * IBM COS S3 * Memset Memstore +* Mega * Microsoft Azure Blob Storage * Microsoft OneDrive * Minio @@ -31,7 +32,7 @@ Rclone is a command line program to sync files and directories to and from: * OVH * Openstack Swift * Oracle Cloud Storage -* Ownloud +* ownCloud * pCloud * put.io * QingStor @@ -198,6 +199,7 @@ See the following for detailed instructions for * [Google Drive](https://rclone.org/drive/) * [HTTP](https://rclone.org/http/) * [Hubic](https://rclone.org/hubic/) + * [Mega](https://rclone.org/mega/) * [Microsoft Azure Blob Storage](https://rclone.org/azureblob/) * [Microsoft OneDrive](https://rclone.org/onedrive/) * [Openstack Swift / Rackspace Cloudfiles / Memset Memstore](https://rclone.org/swift/) @@ -518,6 +520,15 @@ List the objects in the path with size and path. Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default. +Eg + + $ rclone ls swift:bucket + 60295 bevajer5jef + 90613 canole + 94467 diwogej7 + 37600 fubuwic + + Any of the filtering options can be applied to this commmand. There are several related list commands @@ -532,9 +543,13 @@ There are several related list commands `lsf` is designed to be human and machine readable. `lsjson` is designed to be machine readable. -Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion. +Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion. -The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. + +Listing a non existent directory will produce an error except for +remotes which can't have empty directories (eg s3, swift, gcs, etc - +the bucket based remotes). ``` @@ -554,8 +569,27 @@ List all directories/containers/buckets in the path. ### Synopsis -Lists the directories in the source path to standard output. Recurses -by default. +Lists the directories in the source path to standard output. Does not +recurse by default. Use the -R flag to recurse. + +This command lists the total size of the directory (if known, -1 if +not), the modification time (if known, the current time if not), the +number of objects in the directory (if known, -1 if not) and the name +of the directory, Eg + + $ rclone lsd swift: + 494000 2018-04-26 08:43:20 10000 10000files + 65 2018-04-26 08:43:20 1 1File + +Or + + $ rclone lsd drive:test + -1 2016-10-17 17:41:53 -1 1000files + -1 2017-01-03 14:40:54 -1 2500files + -1 2017-07-08 14:39:28 -1 4000files + +If you just want the directory names use "rclone lsf --dirs-only". + Any of the filtering options can be applied to this commmand. @@ -571,9 +605,13 @@ There are several related list commands `lsf` is designed to be human and machine readable. `lsjson` is designed to be machine readable. -Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion. +Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion. -The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. 
+The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. + +Listing a non existent directory will produce an error except for +remotes which can't have empty directories (eg s3, swift, gcs, etc - +the bucket based remotes). ``` @@ -583,7 +621,8 @@ rclone lsd remote:path [flags] ### Options ``` - -h, --help help for lsd + -h, --help help for lsd + -R, --recursive Recurse into the listing. ``` ## rclone lsl @@ -596,6 +635,15 @@ List the objects in path with modification time, size and path. Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default. +Eg + + $ rclone lsl swift:bucket + 60295 2016-06-25 18:55:41.062626927 bevajer5jef + 90613 2016-06-25 18:55:43.302607074 canole + 94467 2016-06-25 18:55:43.046609333 diwogej7 + 37600 2016-06-25 18:55:40.814629136 fubuwic + + Any of the filtering options can be applied to this commmand. There are several related list commands @@ -610,9 +658,13 @@ There are several related list commands `lsf` is designed to be human and machine readable. `lsjson` is designed to be machine readable. -Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion. +Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion. -The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. + +Listing a non existent directory will produce an error except for +remotes which can't have empty directories (eg s3, swift, gcs, etc - +the bucket based remotes). ``` @@ -683,6 +735,7 @@ rclone size remote:path [flags] ``` -h, --help help for size + --json format output as JSON ``` ## rclone version @@ -800,6 +853,7 @@ Dedupe can be run non interactively using the `--dedupe-mode` flag or by using a * `--dedupe-mode first` - removes identical files then keeps the first one. * `--dedupe-mode newest` - removes identical files then keeps the newest one. * `--dedupe-mode oldest` - removes identical files then keeps the oldest one. + * `--dedupe-mode largest` - removes identical files then keeps the largest one. * `--dedupe-mode rename` - removes identical files then renames the rest to be different. For example to rename all the identically named photos in your Google Photos directory, do @@ -822,6 +876,68 @@ rclone dedupe [mode] remote:path [flags] -h, --help help for dedupe ``` +## rclone about + +Get quota information from the remote. + +### Synopsis + + +Get quota information from the remote, like bytes used/free/quota and bytes +used in the trash. Not supported by all remotes. + +This will print to stdout something like this: + + Total: 17G + Used: 7.444G + Free: 1.315G + Trashed: 100.000M + Other: 8.241G + +Where the fields are: + + * Total: total size available. + * Used: total size used + * Free: total amount this user could upload. + * Trashed: total amount in the trash + * Other: total amount in other storage (eg Gmail, Google Photos) + * Objects: total number of objects in the storage + +Note that not all the backends provide all the fields - they will be +missing if they are not known for that backend. Where it is known +that the value is unlimited the value will also be omitted. 
+ +Use the --full flag to see the numbers written out in full, eg + + Total: 18253611008 + Used: 7993453766 + Free: 1411001220 + Trashed: 104857602 + Other: 8849156022 + +Use the --json flag for a computer-readable output, eg + + { + "total": 18253611008, + "used": 7993453766, + "trashed": 104857602, + "other": 8849156022, + "free": 1411001220 + } + + +``` +rclone about remote: [flags] +``` + +### Options + +``` + --full Full numbers instead of SI units + -h, --help help for about + --json Format output as JSON +``` + ## rclone authorize Remote authorization. @@ -1334,6 +1450,69 @@ rclone gendocs output_directory [flags] -h, --help help for gendocs ``` +## rclone hashsum + +Produces a hashsum file for all the objects in the path. + +### Synopsis + + +Produces a hash file for all the objects in the path using the hash +named. The output is in the same format as the standard +md5sum/sha1sum tool. + +Run without a hash to see the list of supported hashes, eg + + $ rclone hashsum + Supported hashes are: + * MD5 + * SHA-1 + * DropboxHash + * QuickXorHash + +Then + + $ rclone hashsum MD5 remote:path + + +``` +rclone hashsum remote:path [flags] +``` + +### Options + +``` + -h, --help help for hashsum +``` + +## rclone link + +Generate public link to file/folder. + +### Synopsis + + +rclone link will create or retrieve a public link to the given file or folder. + + rclone link remote:path/to/file + rclone link remote:path/to/folder/ + +If successful, the last line of the output will contain the link. Exact +capabilities depend on the remote, but the link will always be created with +the least constraints – e.g. no expiry, no password protection, accessible +without an account. + + +``` +rclone link remote:path [flags] +``` + +### Options + +``` + -h, --help help for link +``` + ## rclone listremotes List all the remotes in the config file. @@ -1369,6 +1548,15 @@ standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix. +Eg + + $ rclone lsf swift:bucket + bevajer5jef + canole + diwogej7 + ferejej3gux/ + fubuwic + Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output: @@ -1381,6 +1569,15 @@ output: So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last. +Eg + + $ rclone lsf --format "tsp" swift:bucket + 2016-06-25 18:55:41;60295;bevajer5jef + 2016-06-25 18:55:43;90613;canole + 2016-06-25 18:55:43;94467;diwogej7 + 2018-04-26 08:50:45;0;ferejej3gux/ + 2016-06-25 18:55:40;37600;fubuwic + If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object @@ -1392,12 +1589,31 @@ For example to emulate the md5sum command you can use rclone lsf -R --hash MD5 --format hp --separator " " --files-only . +Eg + + $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket + 7908e352297f0f530b84a756f188baa3 bevajer5jef + cd65ac234e6fea5925974a51cdd865cc canole + 03b5341b4f234b9d984d03ad076bae91 diwogej7 + 8fd37c3810dd660778137ac3a66cc06d fubuwic + 99713e14a4c4ff553acaf1930fad985b gixacuh7ku + (Though "rclone md5sum ." is an easier way of typing this.) By default the separator is ";" - this can be changed with the --separator flag.
Note that separators aren't escaped in the path so putting it last is a good strategy. +Eg + + $ rclone lsf --separator "," --format "tshp" swift:bucket + 2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef + 2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole + 2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7 + 2018-04-26 08:52:53,0,,ferejej3gux/ + 2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic + + Any of the filtering options can be applied to this commmand. There are several related list commands @@ -1412,9 +1628,13 @@ There are several related list commands `lsf` is designed to be human and machine readable. `lsjson` is designed to be machine readable. -Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion. +Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion. -The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. + +Listing a non existent directory will produce an error except for +remotes which can't have empty directories (eg s3, swift, gcs, etc - +the bucket based remotes). ``` @@ -1488,9 +1708,13 @@ There are several related list commands `lsf` is designed to be human and machine readable. `lsjson` is designed to be machine readable. -Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion. +Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion. -The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. + +Listing a non existent directory will produce an error except for +remotes which can't have empty directories (eg s3, swift, gcs, etc - +the bucket based remotes). ``` @@ -1602,12 +1826,30 @@ for solutions to make mount mount more reliable. You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries. -The default is 0s - no caching - which is recommended for filesystems -which can change outside the control of the kernel. +The default is "1s" which caches files just long enough to avoid +too many callbacks to rclone from the kernel. -If you set it higher ('1s' or '1m' say) then the kernel will call back -to rclone less often making it more efficient, however there may be -strange effects when files change on the remote. +In theory 0s should be the correct value for filesystems which can +change outside the control of the kernel. However this causes quite a +few problems such as +[rclone using too much memory](https://github.com/ncw/rclone/issues/2157), +[rclone not serving files to samba](https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112) +and [excessive time listing directories](https://github.com/ncw/rclone/issues/2095#issuecomment-371141147). + +The kernel can cache the info about a file for the time given by +"--attr-timeout". You may see corruption if the remote file changes +length during this window. It will show up as either a truncated file +or a file with garbage on the end. With "--attr-timeout 1s" this is +very unlikely but not impossible. The higher you set "--attr-timeout" +the more likely it is. 
The default setting of "1s" is the lowest +setting which mitigates the problems above. + +If you set it higher ('10s' or '1m' say) then the kernel will call +back to rclone less often making it more efficient, however there is +more chance of the corruption issue above. + +If files don't change on the remote outside of the control of rclone +then there is no chance of corruption. This is the same as setting the attr_timeout option in mount.fuse. @@ -1748,7 +1990,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --allow-non-empty Allow mounting over a non-empty directory. --allow-other Allow access to other users. --allow-root Allow access to root user. - --attr-timeout duration Time for which file/directory attributes are cached. + --attr-timeout duration Time for which file/directory attributes are cached. (default 1s) --daemon Run mount as a daemon (background mode). --debug-fuse Debug the FUSE internals - needs -v. --default-permissions Makes kernel enforce access control based on the file mode. @@ -1844,6 +2086,7 @@ Here are the keys - press '?' to toggle the help on and off c toggle counts g toggle graph n,s,C sort by name,size,count + ^L refresh screen ? to toggle help on and off q/ESC/c-C to quit @@ -2022,10 +2265,11 @@ control the stats printing. Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all -IPs. By default it only listens on localhost. +IPs. By default it only listens on localhost. You can use port +:0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address -then using Authentication if advised - see the next section for info. +then using Authentication is advised - see the next section for info. --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time @@ -2298,10 +2542,11 @@ these **must** end with /. Eg Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all -IPs. By default it only listens on localhost. +IPs. By default it only listens on localhost. You can use port +:0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address -then using Authentication if advised - see the next section for info. +then using Authentication is advised - see the next section for info. --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time @@ -2352,6 +2597,7 @@ rclone serve restic remote:path [flags] ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --append-only disallow deletion of repository data --cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with -h, --help help for restic @@ -2385,10 +2631,11 @@ which is undesirable: see https://github.com/golang/go/issues/22577 Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all -IPs. By default it only listens on localhost. +IPs. By default it only listens on localhost. You can use port +:0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address -then using Authentication if advised - see the next section for info. 
+then using Authentication is advised - see the next section for info. --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time @@ -2784,9 +3031,9 @@ fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Options which use SIZE use kByte by default. However, a suffix of `b` -for bytes, `k` for kBytes, `M` for MBytes and `G` for GBytes may be -used. These are the binary units, eg 1, 2\*\*10, 2\*\*20, 2\*\*30 -respectively. +for bytes, `k` for kBytes, `M` for MBytes, `G` for GBytes, `T` for +TBytes and `P` for PBytes may be used. These are the binary units, eg +1, 2\*\*10, 2\*\*20, 2\*\*30 respectively. ### --backup-dir=DIR ### @@ -3032,6 +3279,10 @@ This can be useful for tracking down problems with syncs in combination with the `-v` flag. See the [Logging section](#logging) for more info. +Note that if you are using the `logrotate` program to manage rclone's +logs, then you should use the `copytruncate` option as rclone doesn't +have a signal to rotate logs. + ### --log-level LEVEL ### This sets the log level for rclone. The default log level is `NOTICE`. @@ -3357,6 +3608,19 @@ This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a `--size-only` check and faster than using `--checksum`. +### --use-server-modtime ### + +Some object-store backends (e.g, Swift, S3) do not preserve file modification +times (modtime). On these backends, rclone stores the original modtime as +additional metadata on the object. By default it will make an API call to +retrieve the metadata when the modtime is needed by an operation. + +Use this flag to disable the extra API call and rely instead on the server's +modified time. In cases such as a local to remote sync, knowing the local file +is newer than the time it was last uploaded to the remote is sufficient. In +those cases, this flag can speed up the process and reduce the number of API +calls necessary. + ### -v, -vv, --verbose ### With `-v` rclone will tell you about each file that is transferred and @@ -3511,6 +3775,17 @@ only. Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on. +#### --dump goroutines #### + +This dumps a list of the running go-routines at the end of the command +to standard output. + +#### --dump openfiles #### + +This dumps a list of the open files at the end of the command. It +uses the `lsof` command to do that so you'll need that installed to +use it. + ### --memprofile=FILE ### Write memory profile to file. This can be analysed with `go tool pprof`. @@ -4287,6 +4562,22 @@ Run `rclone rc` on its own to see the help for the installed remote control commands. ## Supported commands + +### cache/expire: Purge a remote from cache + +Purge a remote from the cache backend. Supports either a directory or a file. +Params: + - remote = path to remote (required) + - withData = true/false to delete cached data (chunks) as well (optional) + +Eg + + rclone rc cache/expire remote=path/to/sub/folder/ + rclone rc cache/expire remote=/ withData=true + +### cache/stats: Get cache stats + +Show statistics for the cache remote. ### core/bwlimit: Set the bandwidth limit. @@ -4294,16 +4585,44 @@ This sets the bandwidth limit to that passed in. 
Eg - rclone core/bwlimit rate=1M - rclone core/bwlimit rate=off + rclone rc core/bwlimit rate=1M + rclone rc core/bwlimit rate=off -### cache/expire: Purge a remote from cache +The format of the parameter is exactly the same as passed to --bwlimit +except only one bandwidth may be specified. -Purge a remote from the cache backend. Supports either a directory or a file. -Params: +### core/memstats: Returns the memory statistics - - remote = path to remote (required) - - withData = true/false to delete cached data (chunks) as well (optional) +This returns the memory statistics of the running program. What the values mean +are explained in the go docs: https://golang.org/pkg/runtime/#MemStats + +The most interesting values for most people are: + +* HeapAlloc: This is the amount of memory rclone is actually using +* HeapSys: This is the amount of memory rclone has obtained from the OS +* Sys: this is the total amount of memory requested from the OS + * It is virtual memory so may include unused memory + +### core/pid: Return PID of current process + +This returns PID of current process. +Useful for stopping rclone process. + +### rc/error: This returns an error + +This returns an error with the input as part of its error string. +Useful for testing error handling. + +### rc/list: List all the registered remote control commands + +This lists all the registered remote control commands as a JSON map in +the commands response. + +### rc/noop: Echo the input to the output parameters + +This echoes the input parameters to the output parameters for testing +purposes. It can be used to check that rclone is still alive and to +check that parameter passing is working properly. ### vfs/forget: Forget files or directories in the directory cache. @@ -4321,21 +4640,7 @@ starting with dir will forget that dir, eg rclone rc vfs/forget file=hello file2=goodbye dir=home/junk -### rc/noop: Echo the input to the output parameters - -This echoes the input parameters to the output parameters for testing -purposes. It can be used to check that rclone is still alive and to -check that parameter passing is working properly. - -### rc/error: This returns an error - -This returns an error with the input as part of its error string. -Useful for testing error handling. - -### rc/list: List all the registered remote control commands - -This lists all the registered remote control commands as a JSON map in -the commands response. + ## Accessing the remote control via HTTP @@ -4483,8 +4788,9 @@ Here is an overview of the major features of each cloud storage system. | Google Drive | MD5 | Yes | No | Yes | R/W | | HTTP | - | No | No | No | R | | Hubic | MD5 | Yes | No | No | R/W | +| Mega | - | No | No | Yes | - | | Microsoft Azure Blob Storage | MD5 | Yes | No | No | R/W | -| Microsoft OneDrive | SHA1 | Yes | Yes | No | R | +| Microsoft OneDrive | SHA1 ‡‡ | Yes | Yes | No | R | | Openstack Swift | MD5 | Yes | No | No | R/W | | pCloud | MD5, SHA1 | Yes | No | No | W | | QingStor | MD5 | No | No | No | R/W | @@ -4512,6 +4818,10 @@ or `sha1sum` as well as `echo` are in the remote's PATH. †† WebDAV supports modtimes when used with Owncloud and Nextcloud only. +‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive +for business and SharePoint server support Microsoft's own +[QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash). 
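As a quick way to exercise the hash support summarised above, the `rclone hashsum` command documented earlier accepts any of the hash names it prints when run with no arguments. A sketch (the remote names are placeholders, and it is assumed the printed hash names are accepted verbatim):

    rclone hashsum SHA-1 remote:path
    rclone hashsum QuickXorHash business:path

Hashes which aren't available on an object come back empty or as "UNSUPPORTED" in listings, as noted in the `rclone lsf` documentation above.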
+ ### ModTime ### The cloud storage system supports setting modification times on @@ -4575,27 +4885,28 @@ All the remotes support a basic set of features, but there are some optional features supported by some remotes used to make some operations more efficient. -| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | -| ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:| -| Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | No | -| Amazon S3 | No | Yes | No | No | No | Yes | Yes | -| Backblaze B2 | No | No | No | No | Yes | Yes | Yes | -| Box | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | Yes | -| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | Yes | -| FTP | No | No | Yes | Yes | No | No | Yes | -| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | -| Google Drive | Yes | Yes | Yes | Yes | Yes | No | Yes | -| HTTP | No | No | No | No | No | No | No | -| Hubic | Yes † | Yes | No | No | No | Yes | Yes | -| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | No | -| Microsoft OneDrive | Yes | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) | No | No | -| Openstack Swift | Yes † | Yes | No | No | No | Yes | Yes | -| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | -| QingStor | No | Yes | No | No | No | Yes | No | -| SFTP | No | No | Yes | Yes | No | No | Yes | -| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | -| Yandex Disk | Yes | No | No | No | Yes | Yes | Yes | -| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | +| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | LinkSharing | About | +| ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------:|:-----:| +| Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| Amazon S3 | No | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| Backblaze B2 | No | No | No | No | Yes | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| Box | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | Yes | Yes | Yes | +| FTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| Google Drive | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | +| HTTP | No | No | No | No | No | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| Hubic | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes | +| Mega | Yes | No | Yes | Yes | No | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes | +| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| Microsoft OneDrive | Yes | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) | No | No 
| No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes | +| Openstack Swift | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes | +| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes | +| QingStor | No | Yes | No | No | No | Yes | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| SFTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| Yandex Disk | Yes | No | No | No | Yes | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | +| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | No | Yes | ### Purge ### @@ -4653,6 +4964,20 @@ Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. `rclone rcat`. +### LinkSharing ### + +Sets the necessary permissions on a file or folder and prints a link +that allows others to access them, even if they don't have an account +on the particular cloud provider. + +### About ### + +This is used to fetch quota information from the remote, like bytes +used/free/quota and bytes used in the trash. + +If the server can't do `About` then `rclone about` will return an +error. + Alias ----------------------------------------- @@ -4994,12 +5319,44 @@ failure. To avoid this problem, use `--max-size 50000M` option to limit the maximum size of uploaded files. Note that `--max-size` does not split files into segments, it only ignores files over this size. -Amazon S3 ---------------------------------------- +Amazon S3 Storage Providers +-------------------------------------------------------- + +The S3 backend can be used with a number of different providers: + +* AWS S3 +* Ceph +* DigitalOcean Spaces +* Dreamhost +* IBM COS S3 +* Minio +* Wasabi Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. +Once you have made a remote (see the provider specific section above) +you can use it like this: + +See all buckets + + rclone lsd remote: + +Make a new bucket + + rclone mkdir remote:bucket + +List the contents of a bucket + + rclone ls remote:bucket + +Sync `/home/local/directory` to the remote bucket, deleting any excess +files in the bucket. + + rclone sync /home/local/directory remote:bucket + +## AWS S3 {#amazon-s3} + Here is an example of making an s3 configuration. First run rclone config @@ -5019,7 +5376,7 @@ Choose a number from below, or type in your own value \ "alias" 2 / Amazon Drive \ "amazon cloud drive" - 3 / Amazon S3 (also Dreamhost, Ceph, Minio) + 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio) \ "s3" 4 / Backblaze B2 \ "b2" @@ -5027,6 +5384,25 @@ Choose a number from below, or type in your own value 23 / http Connection \ "http" Storage> s3 +Choose your S3 provider. 
+Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + 2 / Ceph Object Storage + \ "Ceph" + 3 / Digital Ocean Spaces + \ "DigitalOcean" + 4 / Dreamhost DreamObjects + \ "Dreamhost" + 5 / IBM COS S3 + \ "IBMCOS" + 6 / Minio Object Storage + \ "Minio" + 7 / Wasabi Object Storage + \ "Wasabi" + 8 / Any other S3 compatible provider + \ "Other" +provider> 1 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step @@ -5038,7 +5414,7 @@ AWS Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> XXX AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> YYY -Region to connect to. Leave blank if you are using an S3 clone and you don't have a region. +Region to connect to. Choose a number from below, or type in your own value / The default endpoint - a good choice if you are unsure. 1 | US Region, Northern Virginia or Pacific Northwest. @@ -5083,13 +5459,9 @@ Choose a number from below, or type in your own value / South America (Sao Paulo) Region 14 | Needs location constraint sa-east-1. \ "sa-east-1" - / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH. -15 | Set this and make sure you set the endpoint. - \ "other-v2-signature" region> 1 Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. -Specify if using an S3 clone such as Ceph. endpoint> Location constraint - must be set to match the Region. Used when creating buckets only. Choose a number from below, or type in your own value @@ -5158,10 +5530,14 @@ Choose a number from below, or type in your own value \ "REDUCED_REDUNDANCY" 4 / Standard Infrequent Access storage class \ "STANDARD_IA" + 5 / One Zone Infrequent Access storage class + \ "ONEZONE_IA" storage_class> 1 Remote config -------------------- [remote] +type = s3 +provider = AWS env_auth = false access_key_id = XXX secret_access_key = YYY @@ -5175,34 +5551,28 @@ storage_class = y) Yes this is OK e) Edit this remote d) Delete this remote -y/e/d> y +y/e/d> ``` -This remote is called `remote` and can now be used like this - -See all buckets - - rclone lsd remote: - -Make a new bucket - - rclone mkdir remote:bucket - -List the contents of a bucket - - rclone ls remote:bucket - -Sync `/home/local/directory` to the remote bucket, deleting any excess -files in the bucket. - - rclone sync /home/local/directory remote:bucket - ### --fast-list ### This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](/docs/#fast-list) for more details. +### --update and --use-server-modtime ### + +As noted below, the modified time is stored on metadata on the object. It is +used by default for all operations that require checking the time a file was +last updated. It allows rclone to treat the remote more like a true filesystem, +but it is inefficient because it requires an extra API call to retrieve the +metadata. + +For many operations, the time the object was last uploaded to the remote is +sufficient to determine if it is "dirty". By using `--update` along with +`--use-server-modtime`, you can avoid the extra API call and simply upload +files whose local modtime is newer than the time it was last uploaded. 
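Putting the two flags together with the sync example above gives a sketch like this (the local path and bucket name are placeholders):

    rclone sync --update --use-server-modtime /home/local/directory remote:bucket

With this invocation only files whose local modification time is newer than the time the object was last uploaded are transferred, and no extra per-object metadata calls are made.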
+ ### Modified time ### The modified time is stored as metadata on the object as @@ -5223,27 +5593,34 @@ you will get an error, `incorrect region, the bucket is not in 'XXX' region`. ### Authentication ### -There are two ways to supply `rclone` with a set of AWS -credentials. In order of precedence: - - Directly in the rclone configuration file (as configured by `rclone config`) - - set `access_key_id` and `secret_access_key`. `session_token` can be - optionally set when using AWS STS. - - Runtime configuration: - - set `env_auth` to `true` in the config file - - Exporting the following environment variables before running `rclone` +There are a number of ways to supply `rclone` with a set of AWS +credentials, with and without using the environment. + +The different authentication methods are tried in this order: + + - Directly in the rclone configuration file (`env_auth = false` in the config file): + - `access_key_id` and `secret_access_key` are required. + - `session_token` can be optionally set when using AWS STS. + - Runtime configuration (`env_auth = true` in the config file): + - Export the following environment variables before running `rclone`: - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY` - - Session Token: `AWS_SESSION_TOKEN` - - Running `rclone` in an ECS task with an IAM role - - Running `rclone` on an EC2 instance with an IAM role + - Session Token: `AWS_SESSION_TOKEN` (optional) + - Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html): + - Profile files are standard files used by AWS CLI tools + - By default it will use the profile in your home directory (eg `~/.aws/credentials` on unix based systems) file and the "default" profile, to change set these environment variables: + - `AWS_SHARED_CREDENTIALS_FILE` to control which file. + - `AWS_PROFILE` to control which profile to use. + - Or, run `rclone` in an ECS task with an IAM role (AWS only). + - Or, run `rclone` on an EC2 instance with an IAM role (AWS only). If none of these option actually end up providing `rclone` with AWS credentials then S3 interaction will be non-authenticated (see below). ### S3 Permissions ### -When using the `sync` subcommand of `rclone` the following minimum +When using the `sync` subcommand of `rclone` the following minimum permissions are required to be available on the bucket being written to: * `ListBucket` @@ -5283,10 +5660,10 @@ Notes on above: 1. This is a policy that can be used when creating bucket. It assumes that `USER_NAME` has been created. -2. The Resource entry must include both resource ARNs, as one implies +2. The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects. -For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b) +For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b) that will generate one or more buckets that will work with `rclone sync`. ### Key Management System (KMS) ### @@ -5327,45 +5704,38 @@ Available options include: - STANDARD - default storage class - STANDARD_IA - for less frequently accessed data (e.g backups) + - ONEZONE_IA - for storing data in only one Availability Zone - REDUCED_REDUNDANCY (only for noncritical, reproducible data, has lower redundancy) +#### --s3-chunk-size=SIZE #### + +Any files larger than this will be uploaded in chunks of this +size. 
The default is 5MB. The minimum is 5MB. + +Note that 2 chunks of this size are buffered in memory per transfer. + +If you are transferring large files over high speed links and you have +enough memory, then increasing this will speed up the transfers. + ### Anonymous access to public buckets ### If you want to use rclone to access a public bucket, configure with a -blank `access_key_id` and `secret_access_key`. Eg +blank `access_key_id` and `secret_access_key`. Your config should end +up looking like this: ``` -No remotes found - make a new one -n) New remote -q) Quit config -n/q> n -name> anons3 -What type of source is it? -Choose a number from below - 1) amazon cloud drive - 2) b2 - 3) drive - 4) dropbox - 5) google cloud storage - 6) swift - 7) hubic - 8) local - 9) onedrive -10) s3 -11) yandex -type> 10 -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. -Choose a number from below, or type in your own value - * Enter AWS credentials in the next step - 1) false - * Get AWS credentials from the environment (env vars or IAM) - 2) true -env_auth> 1 -AWS Access Key ID - leave blank for anonymous access or runtime credentials. -access_key_id> -AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. -secret_access_key> -... +[anons3] +type = s3 +provider = AWS +env_auth = false +access_key_id = +secret_access_key = +region = us-east-1 +endpoint = +location_constraint = +acl = private +server_side_encryption = +storage_class = ``` Then use it as normal with the name of the public bucket, eg @@ -5388,15 +5758,16 @@ your config: ``` [ceph] type = s3 +provider = Ceph env_auth = false access_key_id = XXX secret_access_key = YYY -region = +region = endpoint = https://ceph.endpoint.example.com -location_constraint = -acl = -server_side_encryption = -storage_class = +location_constraint = +acl = +server_side_encryption = +storage_class = ``` Note also that Ceph sometimes puts `/` in the passwords it gives @@ -5435,6 +5806,8 @@ your config: ``` [dreamobjects] +type = s3 +provider = DreamHost env_auth = false access_key_id = your_access_key secret_access_key = your_secret_key @@ -5446,7 +5819,6 @@ server_side_encryption = storage_class = ``` - ### DigitalOcean Spaces ### [Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean. @@ -5462,11 +5834,11 @@ Storage> s3 env_auth> 1 access_key_id> YOUR_ACCESS_KEY secret_access_key> YOUR_SECRET_KEY -region> +region> endpoint> nyc3.digitaloceanspaces.com -location_constraint> -acl> -storage_class> +location_constraint> +acl> +storage_class> ``` The resulting configuration file should look like: @@ -5474,15 +5846,16 @@ The resulting configuration file should look like: ``` [spaces] type = s3 +provider = DigitalOcean env_auth = false access_key_id = YOUR_ACCESS_KEY secret_access_key = YOUR_SECRET_KEY -region = +region = endpoint = nyc3.digitaloceanspaces.com -location_constraint = -acl = -server_side_encryption = -storage_class = +location_constraint = +acl = +server_side_encryption = +storage_class = ``` Once configured, you can create a new Space and begin copying files. 
For example: @@ -5493,7 +5866,8 @@ rclone copy /path/to/files spaces:my-new-space ``` ### IBM COS (S3) ### -Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (https://www.ibm.com/cloud/object-storage) + +Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage) To configure access to IBM COS S3, follow the steps below: @@ -5509,30 +5883,41 @@ To configure access to IBM COS S3, follow the steps below: 2. Enter the name for the configuration ``` - name> IBM-COS-XREGION + name> ``` 3. Select "s3" storage. ``` - Type of storage to configure. - Choose a number from below, or type in your own value - 1 / Amazon Drive +Choose a number from below, or type in your own value + 1 / Alias for a existing remote + \ "alias" + 2 / Amazon Drive \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio, IBM COS(S3)) + 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS) \ "s3" - 3 / Backblaze B2 - Storage> 2 + 4 / Backblaze B2 + \ "b2" +[snip] + 23 / http Connection + \ "http" +Storage> 3 ``` -4. Select "Enter AWS credentials…" +4. Select IBM COS as the S3 Storage Provider. ``` - Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. - Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" - env_auth> 1 +Choose the S3 provider. +Choose a number from below, or type in your own value + 1 / Choose this option to configure Storage to AWS S3 + \ "AWS" + 2 / Choose this option to configure Storage to Ceph Systems + \ "Ceph" + 3 / Choose this option to configure Storage to Dreamhost + \ "Dreamhost" + 4 / Choose this option to the configure Storage to IBM COS S3 + \ "IBMCOS" + 5 / Choose this option to the configure Storage to Minio + \ "Minio" + Provider>4 ``` 5. Enter the Access Key and Secret. @@ -5543,138 +5928,96 @@ To configure access to IBM COS S3, follow the steps below: secret_access_key> <> ``` -6. Select "other-v4-signature" region. +6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the option below. For On Premise IBM COS, enter an enpoint address. ``` - Region to connect to. + Endpoint for IBM COS S3 API. + Specify if using an IBM COS On Premise. Choose a number from below, or type in your own value - / The default endpoint - a good choice if you are unsure. - 1 | US Region, Northern Virginia or Pacific Northwest. - | Leave location constraint empty. - \ "us-east-1" - / US East (Ohio) Region - 2 | Needs location constraint us-east-2. - \ "us-east-2" - / US West (Oregon) Region - …… - 15 | eg Ceph/Dreamhost - | set this and make sure you set the endpoint. - \ "other-v2-signature" - / If using an S3 clone that understands v4 signatures set this - 16 | and make sure you set the endpoint. 
- \ "other-v4-signature - region> 16 + 1 / US Cross Region Endpoint + \ "s3-api.us-geo.objectstorage.softlayer.net" + 2 / US Cross Region Dallas Endpoint + \ "s3-api.dal.us-geo.objectstorage.softlayer.net" + 3 / US Cross Region Washington DC Endpoint + \ "s3-api.wdc-us-geo.objectstorage.softlayer.net" + 4 / US Cross Region San Jose Endpoint + \ "s3-api.sjc-us-geo.objectstorage.softlayer.net" + 5 / US Cross Region Private Endpoint + \ "s3-api.us-geo.objectstorage.service.networklayer.com" + 6 / US Cross Region Dallas Private Endpoint + \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com" + 7 / US Cross Region Washington DC Private Endpoint + \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com" + 8 / US Cross Region San Jose Private Endpoint + \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com" + 9 / US Region East Endpoint + \ "s3.us-east.objectstorage.softlayer.net" + 10 / US Region East Private Endpoint + \ "s3.us-east.objectstorage.service.networklayer.com" + 11 / US Region South Endpoint +[snip] + 34 / Toronto Single Site Private Endpoint + \ "s3.tor01.objectstorage.service.networklayer.com" + endpoint>1 ``` -7. Enter the endpoint FQDN. + +7. Specify a IBM COS Location Constraint. The location constraint must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter ``` - Leave blank if using AWS to use the default endpoint for the region. - Specify if using an S3 clone such as Ceph. - endpoint> s3-api.us-geo.objectstorage.softlayer.net + 1 / US Cross Region Standard + \ "us-standard" + 2 / US Cross Region Vault + \ "us-vault" + 3 / US Cross Region Cold + \ "us-cold" + 4 / US Cross Region Flex + \ "us-flex" + 5 / US East Region Standard + \ "us-east-standard" + 6 / US East Region Vault + \ "us-east-vault" + 7 / US East Region Cold + \ "us-east-cold" + 8 / US East Region Flex + \ "us-east-flex" + 9 / US South Region Standard + \ "us-south-standard" + 10 / US South Region Vault + \ "us-south-vault" +[snip] + 32 / Toronto Flex + \ "tor01-flex" +location_constraint>1 ``` -8. Specify a IBM COS Location Constraint. - a. Currently, the only IBM COS values for LocationConstraint are: - us-standard / us-vault / us-cold / us-flex - us-east-standard / us-east-vault / us-east-cold / us-east-flex - us-south-standard / us-south-vault / us-south-cold / us-south-flex - eu-standard / eu-vault / eu-cold / eu-flex +9. Specify a canned ACL. IBM Cloud (Strorage) supports "public-read" and "private". IBM Cloud(Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs. ``` - Location constraint - must be set to match the Region. Used when creating buckets only. - Choose a number from below, or type in your own value - 1 / Empty for US Region, Northern Virginia or Pacific Northwest. - \ "" - 2 / US East (Ohio) Region. - \ "us-east-2" - …… - location_constraint> us-standard +Canned ACL used when creating buckets and/or storing objects in S3. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Choose a number from below, or type in your own value + 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS + \ "private" + 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS + \ "public-read" + 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. 
This acl is available on IBM Cloud (Infra), On-Premise IBM COS + \ "public-read-write" + 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS + \ "authenticated-read" +acl> 1 ``` -9. Specify a canned ACL. -``` - Canned ACL used when creating buckets and/or storing objects in S3. - For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl - Choose a number from below, or type in your own value - 1 / Owner gets FULL_CONTROL. No one else has access rights (default). - \ "private" - 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. - \ "public-read" - / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. - 3 | Granting this on a bucket is generally not recommended. - \ "public-read-write" - 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. - \ "authenticated-read" - / Object owner gets FULL_CONTROL. Bucket owner gets READ access. - 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - \ "bucket-owner-read" - / Both the object owner and the bucket owner get FULL_CONTROL over the object. - 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - \ "bucket-owner-full-control" - acl> 1 -``` -10. Set the SSE option to "None". +12. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this ``` - Choose a number from below, or type in your own value - 1 / None - \ "" - 2 / AES256 - \ "AES256" - server_side_encryption> 1 -``` - -11. Set the storage class to "None" (IBM COS uses the LocationConstraint at the bucket level). -``` - The storage class to use when storing objects in S3. - Choose a number from below, or type in your own value - 1 / Default - \ "" - 2 / Standard storage class - \ "STANDARD" - 3 / Reduced redundancy storage class - \ "REDUCED_REDUNDANCY" - 4 / Standard Infrequent Access storage class - \ "STANDARD_IA" - storage_class> -``` - -12. Review the displayed configuration and accept to save the "remote" then quit. -``` - Remote config - -------------------- - [IBM-COS-XREGION] - env_auth = false - access_key_id = <> - secret_access_key = <> - region = other-v4-signature + [xxx] + type = s3 + Provider = IBMCOS + access_key_id = xxx + secret_access_key = yyy endpoint = s3-api.us-geo.objectstorage.softlayer.net location_constraint = us-standard acl = private - server_side_encryption = - storage_class = - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - Remote config - Current remotes: - - Name Type - ==== ==== - IBM-COS-XREGION s3 - - e) Edit existing remote - n) New remote - d) Delete remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - e/n/d/r/c/s/q> q ``` - - 13. Execute rclone commands ``` 1) Create a bucket. @@ -5694,7 +6037,6 @@ To configure access to IBM COS S3, follow the steps below: rclone delete IBM-COS-XREGION:newbucket/file.txt ``` - ### Minio ### [Minio](https://minio.io/) is an object storage server built for cloud application developers and devops. 
@@ -5745,6 +6087,8 @@ Which makes the config file look like this ``` [minio] +type = s3 +provider = Minio env_auth = false access_key_id = USWUXHGYZQYFYFFIT3RE secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 @@ -5812,21 +6156,21 @@ Choose a number from below, or type in your own value 1 / Empty for US Region, Northern Virginia or Pacific Northwest. \ "" [snip] -location_constraint> +location_constraint> Canned ACL used when creating buckets and/or storing objects in S3. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" [snip] -acl> +acl> The server-side encryption algorithm used when storing this object in S3. Choose a number from below, or type in your own value 1 / None \ "" 2 / AES256 \ "AES256" -server_side_encryption> +server_side_encryption> The storage class to use when storing objects in S3. Choose a number from below, or type in your own value 1 / Default @@ -5837,7 +6181,7 @@ Choose a number from below, or type in your own value \ "REDUCED_REDUNDANCY" 4 / Standard Infrequent Access storage class \ "STANDARD_IA" -storage_class> +storage_class> Remote config -------------------- [wasabi] @@ -5846,10 +6190,10 @@ access_key_id = YOURACCESSKEY secret_access_key = YOURSECRETACCESSKEY region = us-east-1 endpoint = s3.wasabisys.com -location_constraint = -acl = -server_side_encryption = -storage_class = +location_constraint = +acl = +server_side_encryption = +storage_class = -------------------- y) Yes this is OK e) Edit this remote @@ -5861,15 +6205,17 @@ This will leave the config file looking like this. ``` [wasabi] +type = s3 +provider = Wasabi env_auth = false access_key_id = YOURACCESSKEY secret_access_key = YOURSECRETACCESSKEY -region = us-east-1 +region = endpoint = s3.wasabisys.com -location_constraint = -acl = -server_side_encryption = -storage_class = +location_constraint = +acl = +server_side_encryption = +storage_class = ``` Backblaze B2 @@ -6697,7 +7043,7 @@ Flag to clear all the cached data for this remote before. #### --cache-chunk-size=SIZE #### The size of a chunk (partial file data). Use lower numbers for slower -connections. +connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur. **Default**: 5M @@ -7386,7 +7732,7 @@ There are some file names such as `thumbs.db` which Dropbox can't store. There is a full list of them in the ["Ignored Files" section of this document](https://www.dropbox.com/en/help/145). Rclone will issue an error message `File name disallowed - not uploading` if it -attempt to upload one of those file names, but the sync won't fail. +attempts to upload one of those file names, but the sync won't fail. If you have more than 10,000 files in a directory then `rclone purge dropbox:dir` will return the error `Failed to purge: There are too @@ -7727,7 +8073,10 @@ are what rclone will use for authentication. To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the `service_account_file` prompt and rclone won't use the browser based authentication -flow. +flow. If you'd rather stuff the contents of the credentials file into +the rclone config file, you can set `service_account_credentials` with +the actual contents of the file instead, or set the equivalent +environment variable. 
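For illustration, a Google Cloud Storage remote set up with a service account might look like one of the following in the config file (the remote name and path are placeholders, and the inline JSON in the second variant is abbreviated):

```
[gcs]
type = google cloud storage
service_account_file = /path/to/service-account.json
```

or, with the credentials inlined instead:

```
[gcs]
type = google cloud storage
service_account_credentials = {"type": "service_account", ...}
```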
### --fast-list ### @@ -7939,7 +8288,10 @@ actively logged-in users, for example build machines. To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the `service_account_file` prompt during `rclone config` and rclone won't use the browser based -authentication flow. +authentication flow. If you'd rather stuff the contents of the +credentials file into the rclone config file, you can set +`service_account_credentials` with the actual contents of the file +instead, or set the equivalent environment variable. #### Use case - Google Apps/G-suite account and individual Drive #### @@ -8075,6 +8427,14 @@ If you wish to empty your trash you can use the `rclone cleanup remote:` command which will permanently delete all your trashed files. This command does not take any path arguments. +### Quota information ### + +To view your current quota you can use the `rclone about remote:` +command which will display your usage limit (quota), the usage in Google +Drive, the size of all files in the Trash and the space used by other +Google services such as Gmail. This command does not take any path +arguments. + ### Specific options ### Here are the command line options specific to this cloud storage @@ -8548,6 +8908,109 @@ The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these. +Mega +----------------------------------------- + +[Mega](https://mega.nz/) is a cloud storage and file hosting service +known for its security feature where all files are encrypted locally +before they are uploaded. This prevents anyone (including employees of +Mega) from accessing the files without knowledge of the key used for +encryption. + +This is an rclone backend for Mega which supports the file transfer +features of Mega using the same client side encryption. + +Paths are specified as `remote:path` + +Paths may be as deep as required, eg `remote:directory/subdirectory`. + +Here is an example of how to make a remote called `remote`. First run: + + rclone config + +This will guide you through an interactive setup process: + +``` +No remotes found - make a new one +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. +Choose a number from below, or type in your own value + 1 / Alias for a existing remote + \ "alias" +[snip] +14 / Mega + \ "mega" +[snip] +23 / http Connection + \ "http" +Storage> mega +User name +user> you@example.com +Password. +y) Yes type in my own password +g) Generate random password +n) No leave this optional password blank +y/g/n> y +Enter the password: +password: +Confirm the password: +password: +Remote config +-------------------- +[remote] +type = mega +user = you@example.com +pass = *** ENCRYPTED *** +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +Once configured you can then use `rclone` like this, + +List directories in top level of your Mega + + rclone lsd remote: + +List all the files in your Mega + + rclone ls remote: + +To copy a local directory to an Mega directory called backup + + rclone copy /home/source remote:backup + +### Modified time and hashes ### + +Mega does not support modification times or hashes yet. + +### Duplicated files ### + +Mega can have two files with exactly the same name and path (unlike a +normal file system). 
+ +Duplicated files cause problems with the syncing and you will see +messages in the log about duplicates. + +Use `rclone dedupe` to fix duplicated files. + +### Limitations ### + +This backend uses the [go-mega go +library](https://github.com/t3rm1n4l/go-mega) which is an opensource +go library implementing the Mega API. There doesn't appear to be any +documentation for the mega protocol beyond the [mega C++ +SDK](https://github.com/meganz/sdk) source code so there are likely +quite a few errors still remaining in this library. + +Mega allows duplicate files which may confuse rclone. + Microsoft Azure Blob Storage ----------------------------------------- @@ -8832,8 +9295,11 @@ OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. -One drive supports SHA1 type hashes, so you can use `--checksum` flag. +OneDrive personal supports SHA1 type hashes. OneDrive for business and +Sharepoint Server support +[QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash). +For all types of OneDrive you can use the `--checksum` flag. ### Deleting files ### @@ -9282,6 +9748,19 @@ This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the [rclone docs](/docs/#fast-list) for more details. +### --update and --use-server-modtime ### + +As noted below, the modified time is stored on metadata on the object. It is +used by default for all operations that require checking the time a file was +last updated. It allows rclone to treat the remote more like a true filesystem, +but it is inefficient because it requires an extra API call to retrieve the +metadata. + +For many operations, the time the object was last uploaded to the remote is +sufficient to determine if it is "dirty". By using `--update` along with +`--use-server-modtime`, you can avoid the extra API call and simply upload +files whose local modtime is newer than the time it was last uploaded. + ### Specific options ### Here are the command line options specific to this cloud storage @@ -9462,13 +9941,16 @@ SFTP SFTP is the [Secure (or SSH) File Transfer Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol). -It runs over SSH v2 and is standard with most modern SSH -installations. +SFTP runs over SSH v2 and is installed as standard with most modern +SSH installations. Paths are specified as `remote:path`. If the path does not begin with a `/` it is relative to the home directory of the user. An empty path `remote:` refers to the user's home directory. +Note that some SFTP servers will need the leading `/` - Synology is a +good example of this. + Here is an example of making an SFTP configuration. First run rclone config @@ -9619,10 +10101,15 @@ your RClone backend configuration to disable this behaviour. SFTP supports checksums if the same login has shell access and `md5sum` or `sha1sum` as well as `echo` are in the remote's PATH. -This remote check can be disabled by setting the configuration option -`disable_hashcheck`. This may be required if you're connecting to SFTP servers +This remote checksumming (file hashing) is recommended and enabled by default. +Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands -is prohibited. +is prohibited. Set the configuration option `disable_hashcheck` to `true` to +disable checksumming. 
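For example, a minimal SFTP remote with checksumming disabled might look like this in the config file (a sketch - the remote name, host and user are placeholders):

```
[sftpserver]
type = sftp
host = sftp.example.com
user = me
disable_hashcheck = true
```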
+ +Note that on some SFTP servers (eg Synology) the paths are different for +SSH and SFTP, so the hashes can't be calculated properly. For these +servers, using `disable_hashcheck` is a good idea. + The only ssh agent supported under Windows is Putty's pageant. @@ -9719,7 +10206,9 @@ Choose a number from below, or type in your own value \ "nextcloud" 2 / Owncloud \ "owncloud" - 3 / Other site/service or software + 3 / Sharepoint + \ "sharepoint" + 4 / Other site/service or software \ "other" vendor> 1 User name @@ -9809,6 +10298,47 @@ mount. For more help see [the put.io webdav docs](http://help.put.io/apps-and-integrations/ftp-and-webdav). +## Sharepoint ## + +Rclone can be used with Sharepoint provided by OneDrive for Business +or Office365 Education accounts. +This feature is only needed for a few of these accounts, +mostly Office365 Education ones. These accounts are sometimes not +verified by the domain owner [github#1975](https://github.com/ncw/rclone/issues/1975). + +This means that these accounts can't be added using the official +API (other accounts should work with the "onedrive" option). However, +it is possible to access them using webdav. + +To use a Sharepoint remote with rclone, first get your remote's URL: + +- Go [here](https://onedrive.live.com/about/en-us/signin/) + to open your OneDrive or to sign in +- Now take a look at your address bar; the URL should look like this: + `https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_layouts/15/onedrive.aspx` + +You'll only need this URL up to the email address. After that, you'll +most likely want to add "/Documents". That subdirectory contains +the actual data stored on your OneDrive. + +Then add the remote to rclone: +configure the `url` as `https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents` +and use your normal account email and password for `user` and `pass`. +If you have 2FA enabled, you have to generate an app password. +Set the `vendor` to `sharepoint`. + +Your config file should look like this: + +``` +[sharepoint] +type = webdav +url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents +vendor = sharepoint +user = YourEmailAddress +pass = encryptedpassword +``` + Yandex Disk ---------------------------------------- @@ -10049,6 +10579,18 @@ $ rclone -L ls /tmp/a 6 b/one ``` +#### --local-no-check-updated #### + +Don't check to see if the files change during upload. + +Normally rclone checks the size and modification time of files as they +are being uploaded and aborts with a message which starts `can't copy +- source file is being updated` if the file changes during upload. + +However, on some file systems this modification time check may fail (eg +[Glusterfs #2206](https://github.com/ncw/rclone/issues/2206)) so this +check can be disabled with this flag. + #### --local-no-unicode-normalization #### This flag is deprecated now. Rclone no longer normalizes unicode file @@ -10104,6 +10646,98 @@ points, as you explicitly acknowledge that they should be skipped.
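Returning to the `--local-no-check-updated` flag described above, a copy from a file system where the check misfires might be invoked like this (a sketch - the paths are placeholders):

    rclone copy --local-no-check-updated /mnt/glusterfs/data remote:backup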
Changelog --------- + * v1.41 - 2018-04-28 + * New backends + * Mega support added + * Webdav now supports SharePoint cookie authentication (hensur) + * New commands + * link: create public link to files and folders (Stefan Breunig) + * about: gets quota info from a remote (a-roussos, ncw) + * hashsum: a generic tool for any hash to produce md5sum like output + * New Features + * lsd: Add -R flag and fix and update docs for all ls commands + * ncdu: added a "refresh" key - CTRL-L (Keith Goldfarb) + * serve restic: Add append-only mode (Steve Kriss) + * serve restic: Disallow overwriting files in append-only mode (Alexander Neumann) + * serve restic: Print actual listener address (Matt Holt) + * size: Add --json flag (Matthew Holt) + * sync: implement --ignore-errors (Mateusz Pabian) + * dedupe: Add dedupe largest functionality (Richard Yang) + * fs: Extend SizeSuffix to include TB and PB for rclone about + * fs: add --dump goroutines and --dump openfiles for debugging + * rc: implement core/memstats to print internal memory usage info + * rc: new call rc/pid (Michael P. Dubner) + * Compile + * Drop support for go1.6 + * Release + * Fix `make tarball` (Chih-Hsuan Yen) + * Bug Fixes + * filter: fix --min-age and --max-age together check + * fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport + * lsd,lsf: make sure all times we output are in local time + * rc: fix setting bwlimit to unlimited + * rc: take note of the --rc-addr flag too as per the docs + * Mount + * Use About to return the correct disk total/used/free (eg in `df`) + * Set `--attr-timeout default` to `1s` - fixes: + * rclone using too much memory + * rclone not serving files to samba + * excessive time listing directories + * Fix `df -i` (upstream fix) + * VFS + * Filter files `.` and `..` from directory listing + * Only make the VFS cache if --vfs-cache-mode > Off + * Local + * Add --local-no-check-updated to disable updated file checks + * Retry remove on Windows sharing violation error + * Cache + * Flush the memory cache after close + * Purge file data on notification + * Always forget parent dir for notifications + * Integrate with Plex websocket + * Add rc cache/stats (seuffert) + * Add info log on notification + * Box + * Fix failure reading large directories - parse file/directory size as float + * Dropbox + * Fix crypt+obfuscate on dropbox + * Fix repeatedly uploading the same files + * FTP + * Work around strange response from box FTP server + * More workarounds for FTP servers to fix mkParentDir error + * Fix no error on listing non-existent directory + * Google Cloud Storage + * Add service_account_credentials (Matt Holt) + * Detect bucket presence by listing it - minimises permissions needed + * Ignore zero length directory markers + * Google Drive + * Add service_account_credentials (Matt Holt) + * Fix directory move leaving a hardlinked directory behind + * Return proper google errors when Opening files + * When initialized with a filepath, optional features used incorrect root path (Stefan Breunig) + * HTTP + * Fix sync for servers which don't return Content-Length in HEAD + * Onedrive + * Add QuickXorHash support for OneDrive for business + * Fix socket leak in multipart session upload + * S3 + * Look in S3 named profile files for credentials + * Add `--s3-disable-checksum` to disable checksum uploading (Chris Redekop) + * Hierarchical configuration support (Giri Badanahatti) + * Add in config for all the supported S3 providers + * Add One Zone Infrequent Access storage class (Craig Rachel) + * Add 
--use-server-modtime support (Peter Baumgartner) + * Add --s3-chunk-size option to control multipart uploads + * Ignore zero length directory markers + * SFTP + * Update docs to match code, fix typos and clarify disable_hashcheck prompt (Michael G. Noll) + * Update docs with Synology quirks + * Fail soft with a debug on hash failure + * Swift + * Add --use-server-modtime support (Peter Baumgartner) + * Webdav + * Support SharePoint cookie authentication (hensur) + * Strip leading and trailing / off root * v1.40 - 2018-03-19 * New backends * Alias backend to create aliases for existing remote names (Fabian Möller) @@ -11402,7 +12036,7 @@ Contributors * John Papandriopoulos * Zhiming Wang * Andy Pilate - * Oliver Heyme + * Oliver Heyme * wuyu * Andrei Dragomir * Christian Brüggemann @@ -11436,7 +12070,7 @@ Contributors * Jon Fautley * lewapm <32110057+lewapm@users.noreply.github.com> * Yassine Imounachen - * Chris Redekop + * Chris Redekop * Jon Fautley * Will Gunn * Lucas Bremgartner @@ -11454,6 +12088,24 @@ Contributors * wolfv * Dave Pedu * Stefan Lindblom + * seuffert + * gbadanahatti <37121690+gbadanahatti@users.noreply.github.com> + * Keith Goldfarb + * Steve Kriss + * Chih-Hsuan Yen + * Alexander Neumann + * Matt Holt + * Eri Bastos + * Michael P. Dubner + * Antoine GIRARD + * Mateusz Piotrowski + * Animosity022 + * Peter Baumgartner + * Craig Rachel + * Michael G. Noll + * hensur + * Oliver Heyme + * Richard Yang # Contact the rclone project # diff --git a/MANUAL.txt b/MANUAL.txt index b340e2a26..b93084745 100644 --- a/MANUAL.txt +++ b/MANUAL.txt @@ -1,6 +1,6 @@ rclone(1) User Manual Nick Craig-Wood -Mar 19, 2018 +Apr 28, 2018 @@ -27,6 +27,7 @@ from: - Hubic - IBM COS S3 - Memset Memstore +- Mega - Microsoft Azure Blob Storage - Microsoft OneDrive - Minio @@ -34,7 +35,7 @@ from: - OVH - Openstack Swift - Oracle Cloud Storage -- Ownloud +- ownCloud - pCloud - put.io - QingStor @@ -209,6 +210,7 @@ See the following for detailed instructions for - Google Drive - HTTP - Hubic +- Mega - Microsoft Azure Blob Storage - Microsoft OneDrive - Openstack Swift / Rackspace Cloudfiles / Memset Memstore @@ -487,6 +489,14 @@ Synopsis Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default. +Eg + + $ rclone ls swift:bucket + 60295 bevajer5jef + 90613 canole + 94467 diwogej7 + 37600 fubuwic + Any of the filtering options can be applied to this commmand. There are several related list commands @@ -500,11 +510,15 @@ There are several related list commands ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable. -Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to -stop the recursion. +Note that ls and lsl recurse by default - use "--max-depth 1" to stop +the recursion. -The other list commands lsf,lsjson do not recurse by default - use "-R" -to make them recurse. +The other list commands lsd,lsf,lsjson do not recurse by default - use +"-R" to make them recurse. + +Listing a non existent directory will produce an error except for +remotes which can't have empty directories (eg s3, swift, gcs, etc - the +bucket based remotes). rclone ls remote:path [flags] @@ -519,8 +533,26 @@ List all directories/containers/buckets in the path. Synopsis -Lists the directories in the source path to standard output. Recurses by -default. +Lists the directories in the source path to standard output. Does not +recurse by default. Use the -R flag to recurse. 
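+
+For example, to recurse into all directories (using the usual
+placeholder remote name):
+
+    rclone lsd -R remote: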
+ +This command lists the total size of the directory (if known, -1 if +not), the modification time (if known, the current time if not), the +number of objects in the directory (if known, -1 if not) and the name of +the directory, Eg + + $ rclone lsd swift: + 494000 2018-04-26 08:43:20 10000 10000files + 65 2018-04-26 08:43:20 1 1File + +Or + + $ rclone lsd drive:test + -1 2016-10-17 17:41:53 -1 1000files + -1 2017-01-03 14:40:54 -1 2500files + -1 2017-07-08 14:39:28 -1 4000files + +If you just want the directory names use "rclone lsf --dirs-only". Any of the filtering options can be applied to this commmand. @@ -535,17 +567,22 @@ There are several related list commands ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable. -Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to -stop the recursion. +Note that ls and lsl recurse by default - use "--max-depth 1" to stop +the recursion. -The other list commands lsf,lsjson do not recurse by default - use "-R" -to make them recurse. +The other list commands lsd,lsf,lsjson do not recurse by default - use +"-R" to make them recurse. + +Listing a non existent directory will produce an error except for +remotes which can't have empty directories (eg s3, swift, gcs, etc - the +bucket based remotes). rclone lsd remote:path [flags] Options - -h, --help help for lsd + -h, --help help for lsd + -R, --recursive Recurse into the listing. rclone lsl @@ -558,6 +595,14 @@ Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default. +Eg + + $ rclone lsl swift:bucket + 60295 2016-06-25 18:55:41.062626927 bevajer5jef + 90613 2016-06-25 18:55:43.302607074 canole + 94467 2016-06-25 18:55:43.046609333 diwogej7 + 37600 2016-06-25 18:55:40.814629136 fubuwic + Any of the filtering options can be applied to this commmand. There are several related list commands @@ -571,11 +616,15 @@ There are several related list commands ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable. -Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to -stop the recursion. +Note that ls and lsl recurse by default - use "--max-depth 1" to stop +the recursion. -The other list commands lsf,lsjson do not recurse by default - use "-R" -to make them recurse. +The other list commands lsd,lsf,lsjson do not recurse by default - use +"-R" to make them recurse. + +Listing a non existent directory will produce an error except for +remotes which can't have empty directories (eg s3, swift, gcs, etc - the +bucket based remotes). rclone lsl remote:path [flags] @@ -629,6 +678,7 @@ Prints the total size and number of objects in remote:path. Options -h, --help help for size + --json format output as JSON rclone version @@ -742,6 +792,8 @@ using an extra parameter with the same value one. - --dedupe-mode oldest - removes identical files then keeps the oldest one. +- --dedupe-mode largest - removes identical files then keeps the + largest one. - --dedupe-mode rename - removes identical files then renames the rest to be different. @@ -762,6 +814,63 @@ Options -h, --help help for dedupe +rclone about + +Get quota information from the remote. + +Synopsis + +Get quota information from the remote, like bytes used/free/quota and +bytes used in the trash. Not supported by all remotes. 
+ +This will print to stdout something like this: + + Total: 17G + Used: 7.444G + Free: 1.315G + Trashed: 100.000M + Other: 8.241G + +Where the fields are: + +- Total: total size available. +- Used: total size used +- Free: total amount this user could upload. +- Trashed: total amount in the trash +- Other: total amount in other storage (eg Gmail, Google Photos) +- Objects: total number of objects in the storage + +Note that not all the backends provide all the fields - they will be +missing if they are not known for that backend. Where it is known that +the value is unlimited the value will also be omitted. + +Use the --full flag to see the numbers written out in full, eg + + Total: 18253611008 + Used: 7993453766 + Free: 1411001220 + Trashed: 104857602 + Other: 8849156022 + +Use the --json flag for a computer readable output, eg + + { + "total": 18253611008, + "used": 7993453766, + "trashed": 104857602, + "other": 8849156022, + "free": 1411001220 + } + + rclone about remote: [flags] + +Options + + --full Full numbers instead of SI units + -h, --help help for about + --json Format output as JSON + + rclone authorize Remote authorization. @@ -1185,6 +1294,60 @@ Options -h, --help help for gendocs +rclone hashsum + +Produces an hashsum file for all the objects in the path. + +Synopsis + +Produces a hash file for all the objects in the path using the hash +named. The output is in the same format as the standard md5sum/sha1sum +tool. + +Run without a hash to see the list of supported hashes, eg + + $ rclone hashsum + Supported hashes are: + * MD5 + * SHA-1 + * DropboxHash + * QuickXorHash + +Then + + $ rclone hashsum MD5 remote:path + + rclone hashsum remote:path [flags] + +Options + + -h, --help help for hashsum + + +rclone link + +Generate public link to file/folder. + +Synopsis + +rclone link will create or retrieve a public link to the given file or +folder. + + rclone link remote:path/to/file + rclone link remote:path/to/folder/ + +If successful, the last line of the output will contain the link. Exact +capabilities depend on the remote, but the link will always be created +with the least constraints – e.g. no expiry, no password protection, +accessible without account. + + rclone link remote:path [flags] + +Options + + -h, --help help for link + + rclone listremotes List all the remotes in the config file. @@ -1214,6 +1377,15 @@ standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix. +Eg + + $ rclone lsf swift:bucket + bevajer5jef + canole + diwogej7 + ferejej3gux/ + fubuwic + Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output: @@ -1225,6 +1397,15 @@ just the path, but you can use these parameters to control the output: So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last. +Eg + + $ rclone lsf --format "tsp" swift:bucket + 2016-06-25 18:55:41;60295;bevajer5jef + 2016-06-25 18:55:43;90613;canole + 2016-06-25 18:55:43;94467;diwogej7 + 2018-04-26 08:50:45;0;ferejej3gux/ + 2016-06-25 18:55:40;37600;fubuwic + If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. 
Note that this can be returned as an empty string if it isn't available on the object (and @@ -1235,12 +1416,30 @@ For example to emulate the md5sum command you can use rclone lsf -R --hash MD5 --format hp --separator " " --files-only . +Eg + + $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket + 7908e352297f0f530b84a756f188baa3 bevajer5jef + cd65ac234e6fea5925974a51cdd865cc canole + 03b5341b4f234b9d984d03ad076bae91 diwogej7 + 8fd37c3810dd660778137ac3a66cc06d fubuwic + 99713e14a4c4ff553acaf1930fad985b gixacuh7ku + (Though "rclone md5sum ." is an easier way of typing this.) By default the separator is ";" this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy. +Eg + + $ rclone lsf --separator "," --format "tshp" swift:bucket + 2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef + 2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole + 2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7 + 2018-04-26 08:52:53,0,,ferejej3gux/ + 2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic + Any of the filtering options can be applied to this commmand. There are several related list commands @@ -1254,11 +1453,15 @@ There are several related list commands ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable. -Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to -stop the recursion. +Note that ls and lsl recurse by default - use "--max-depth 1" to stop +the recursion. -The other list commands lsf,lsjson do not recurse by default - use "-R" -to make them recurse. +The other list commands lsd,lsf,lsjson do not recurse by default - use +"-R" to make them recurse. + +Listing a non existent directory will produce an error except for +remotes which can't have empty directories (eg s3, swift, gcs, etc - the +bucket based remotes). rclone lsf remote:path [flags] @@ -1321,11 +1524,15 @@ There are several related list commands ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable. -Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to -stop the recursion. +Note that ls and lsl recurse by default - use "--max-depth 1" to stop +the recursion. -The other list commands lsf,lsjson do not recurse by default - use "-R" -to make them recurse. +The other list commands lsd,lsf,lsjson do not recurse by default - use +"-R" to make them recurse. + +Listing a non existent directory will produce an error except for +remotes which can't have empty directories (eg s3, swift, gcs, etc - the +bucket based remotes). rclone lsjson remote:path [flags] @@ -1428,12 +1635,28 @@ Attribute caching You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries. -The default is 0s - no caching - which is recommended for filesystems -which can change outside the control of the kernel. +The default is "1s" which caches files just long enough to avoid too +many callbacks to rclone from the kernel. -If you set it higher ('1s' or '1m' say) then the kernel will call back -to rclone less often making it more efficient, however there may be -strange effects when files change on the remote. +In theory 0s should be the correct value for filesystems which can +change outside the control of the kernel. 
However this causes quite a +few problems such as rclone using too much memory, rclone not serving +files to samba and excessive time listing directories. + +The kernel can cache the info about a file for the time given by +"--attr-timeout". You may see corruption if the remote file changes +length during this window. It will show up as either a truncated file or +a file with garbage on the end. With "--attr-timeout 1s" this is very +unlikely but not impossible. The higher you set "--attr-timeout" the +more likely it is. The default setting of "1s" is the lowest setting +which mitigates the problems above. + +If you set it higher ('10s' or '1m' say) then the kernel will call back +to rclone less often making it more efficient, however there is more +chance of the corruption issue above. + +If files don't change on the remote outside of the control of rclone +then there is no chance of corruption. This is the same as setting the attr_timeout option in mount.fuse. @@ -1568,7 +1791,7 @@ Options --allow-non-empty Allow mounting over a non-empty directory. --allow-other Allow access to other users. --allow-root Allow access to root user. - --attr-timeout duration Time for which file/directory attributes are cached. + --attr-timeout duration Time for which file/directory attributes are cached. (default 1s) --daemon Run mount as a daemon (background mode). --debug-fuse Debug the FUSE internals - needs -v. --default-permissions Makes kernel enforce access control based on the file mode. @@ -1655,6 +1878,7 @@ Here are the keys - press '?' to toggle the help on and off c toggle counts g toggle graph n,s,C sort by name,size,count + ^L refresh screen ? to toggle help on and off q/ESC/c-C to quit @@ -1809,10 +2033,11 @@ Server options Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By -default it only listens on localhost. +default it only listens on localhost. You can use port :0 to let the OS +choose an available port. If you set --addr to listen on a public or LAN accessible IP address -then using Authentication if advised - see the next section for info. +then using Authentication is advised - see the next section for info. --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a @@ -2075,10 +2300,11 @@ Server options Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By -default it only listens on localhost. +default it only listens on localhost. You can use port :0 to let the OS +choose an available port. If you set --addr to listen on a public or LAN accessible IP address -then using Authentication if advised - see the next section for info. +then using Authentication is advised - see the next section for info. --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a @@ -2125,6 +2351,7 @@ authority certificate. Options --addr string IPaddress:Port or :Port to bind server to. 
(default "localhost:8080") + --append-only disallow deletion of repository data --cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with -h, --help help for restic @@ -2156,10 +2383,11 @@ Server options Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By -default it only listens on localhost. +default it only listens on localhost. You can use port :0 to let the OS +choose an available port. If you set --addr to listen on a public or LAN accessible IP address -then using Authentication if advised - see the next section for info. +then using Authentication is advised - see the next section for info. --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a @@ -2539,8 +2767,9 @@ and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Options which use SIZE use kByte by default. However, a suffix of b for -bytes, k for kBytes, M for MBytes and G for GBytes may be used. These -are the binary units, eg 1, 2**10, 2**20, 2**30 respectively. +bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for +PBytes may be used. These are the binary units, eg 1, 2**10, 2**20, +2**30 respectively. --backup-dir=DIR @@ -2786,6 +3015,10 @@ Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info. +Note that if you are using the logrotate program to manage rclone's +logs, then you should use the copytruncate option as rclone doesn't have +a signal to rotate logs. + --log-level LEVEL This sets the log level for rclone. The default log level is NOTICE. @@ -3105,6 +3338,20 @@ This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a --size-only check and faster than using --checksum. +--use-server-modtime + +Some object-store backends (e.g, Swift, S3) do not preserve file +modification times (modtime). On these backends, rclone stores the +original modtime as additional metadata on the object. By default it +will make an API call to retrieve the metadata when the modtime is +needed by an operation. + +Use this flag to disable the extra API call and rely instead on the +server's modified time. In cases such as a local to remote sync, knowing +the local file is newer than the time it was last uploaded to the remote +is sufficient. In those cases, this flag can speed up the process and +reduce the number of API calls necessary. + -v, -vv, --verbose With -v rclone will tell you about each file that is transferred and a @@ -3249,6 +3496,16 @@ be very verbose. Useful for debugging only. Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on. +--dump goroutines + +This dumps a list of the running go-routines at the end of the command +to standard output. + +--dump openfiles + +This dumps a list of the open files at the end of the command. It uses +the lsof command to do that so you'll need that installed to use it. + --memprofile=FILE Write memory profile to file. This can be analysed with go tool pprof. @@ -4045,23 +4302,66 @@ control commands. Supported commands +cache/expire: Purge a remote from cache + +Purge a remote from the cache backend. 
Supports either a directory or a +file. Params: - remote = path to remote (required) - withData = +true/false to delete cached data (chunks) as well (optional) + +Eg + + rclone rc cache/expire remote=path/to/sub/folder/ + rclone rc cache/expire remote=/ withData=true + +cache/stats: Get cache stats + +Show statistics for the cache remote. + core/bwlimit: Set the bandwidth limit. This sets the bandwidth limit to that passed in. Eg - rclone core/bwlimit rate=1M - rclone core/bwlimit rate=off + rclone rc core/bwlimit rate=1M + rclone rc core/bwlimit rate=off -cache/expire: Purge a remote from cache +The format of the parameter is exactly the same as passed to --bwlimit +except only one bandwidth may be specified. -Purge a remote from the cache backend. Supports either a directory or a -file. Params: +core/memstats: Returns the memory statistics -- remote = path to remote (required) -- withData = true/false to delete cached data (chunks) as well - (optional) +This returns the memory statistics of the running program. What the +values mean are explained in the go docs: +https://golang.org/pkg/runtime/#MemStats + +The most interesting values for most people are: + +- HeapAlloc: This is the amount of memory rclone is actually using +- HeapSys: This is the amount of memory rclone has obtained from the + OS +- Sys: this is the total amount of memory requested from the OS +- It is virtual memory so may include unused memory + +core/pid: Return PID of current process + +This returns PID of current process. Useful for stopping rclone process. + +rc/error: This returns an error + +This returns an error with the input as part of its error string. Useful +for testing error handling. + +rc/list: List all the registered remote control commands + +This lists all the registered remote control commands as a JSON map in +the commands response. + +rc/noop: Echo the input to the output parameters + +This echoes the input parameters to the output parameters for testing +purposes. It can be used to check that rclone is still alive and to +check that parameter passing is working properly. vfs/forget: Forget files or directories in the directory cache. @@ -4079,22 +4379,6 @@ will forget that dir, eg rclone rc vfs/forget file=hello file2=goodbye dir=home/junk -rc/noop: Echo the input to the output parameters - -This echoes the input parameters to the output parameters for testing -purposes. It can be used to check that rclone is still alive and to -check that parameter passing is working properly. - -rc/error: This returns an error - -This returns an error with the input as part of its error string. Useful -for testing error handling. - -rc/list: List all the registered remote control commands - -This lists all the registered remote control commands as a JSON map in -the commands response. - Accessing the remote control via HTTP @@ -4218,8 +4502,9 @@ Here is an overview of the major features of each cloud storage system. Google Drive MD5 Yes No Yes R/W HTTP - No No No R Hubic MD5 Yes No No R/W + Mega - No No Yes - Microsoft Azure Blob Storage MD5 Yes No No R/W - Microsoft OneDrive SHA1 Yes Yes No R + Microsoft OneDrive SHA1 ‡‡ Yes Yes No R Openstack Swift MD5 Yes No No R/W pCloud MD5, SHA1 Yes No No W QingStor MD5 No No No R/W @@ -4246,6 +4531,9 @@ or sha1sum as well as echo are in the remote's PATH. †† WebDAV supports modtimes when used with Owncloud and Nextcloud only. +‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive +for business and SharePoint server support Microsoft's own QuickXorHash. 
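+
+For example, you could list QuickXorHash hashes on a OneDrive for
+Business remote with the hashsum command described above (the remote
+name and path are placeholders):
+
+    rclone hashsum QuickXorHash remote:path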
+ ModTime The cloud storage system supports setting modification times on objects. @@ -4309,27 +4597,28 @@ All the remotes support a basic set of features, but there are some optional features supported by some remotes used to make some operations more efficient. - Name Purge Copy Move DirMove CleanUp ListR StreamUpload - ------------------------------ ------- ------ ------ --------- --------- ------- -------------- - Amazon Drive Yes No Yes Yes No #575 No No - Amazon S3 No Yes No No No Yes Yes - Backblaze B2 No No No No Yes Yes Yes - Box Yes Yes Yes Yes No #575 No Yes - Dropbox Yes Yes Yes Yes No #575 No Yes - FTP No No Yes Yes No No Yes - Google Cloud Storage Yes Yes No No No Yes Yes - Google Drive Yes Yes Yes Yes Yes No Yes - HTTP No No No No No No No - Hubic Yes † Yes No No No Yes Yes - Microsoft Azure Blob Storage Yes Yes No No No Yes No - Microsoft OneDrive Yes Yes Yes No #197 No #575 No No - Openstack Swift Yes † Yes No No No Yes Yes - pCloud Yes Yes Yes Yes Yes No No - QingStor No Yes No No No Yes No - SFTP No No Yes Yes No No Yes - WebDAV Yes Yes Yes Yes No No Yes ‡ - Yandex Disk Yes No No No Yes Yes Yes - The local filesystem Yes No Yes Yes No No Yes + Name Purge Copy Move DirMove CleanUp ListR StreamUpload LinkSharing About + ------------------------------ ------- ------ ------ --------- --------- ------- -------------- ------------- ------- + Amazon Drive Yes No Yes Yes No #575 No No No #2178 No + Amazon S3 No Yes No No No Yes Yes No #2178 No + Backblaze B2 No No No No Yes Yes Yes No #2178 No + Box Yes Yes Yes Yes No #575 No Yes No #2178 No + Dropbox Yes Yes Yes Yes No #575 No Yes Yes Yes + FTP No No Yes Yes No No Yes No #2178 No + Google Cloud Storage Yes Yes No No No Yes Yes No #2178 No + Google Drive Yes Yes Yes Yes Yes No Yes Yes Yes + HTTP No No No No No No No No #2178 No + Hubic Yes † Yes No No No Yes Yes No #2178 Yes + Mega Yes No Yes Yes No No No No #2178 Yes + Microsoft Azure Blob Storage Yes Yes No No No Yes No No #2178 No + Microsoft OneDrive Yes Yes Yes No #197 No #575 No No No #2178 Yes + Openstack Swift Yes † Yes No No No Yes Yes No #2178 Yes + pCloud Yes Yes Yes Yes Yes No No No #2178 Yes + QingStor No Yes No No No Yes No No #2178 No + SFTP No No Yes Yes No No Yes No #2178 No + WebDAV Yes Yes Yes Yes No No Yes ‡ No #2178 No + Yandex Disk Yes No No No Yes Yes Yes No #2178 No + The local filesystem Yes No Yes Yes No No Yes No Yes Purge @@ -4386,6 +4675,19 @@ Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat. +LinkSharing + +Sets the necessary permissions on a file or folder and prints a link +that allows others to access them, even if they don't have an account on +the particular cloud provider. + +About + +This is used to fetch quota information from the remote, like bytes +used/free/quota and bytes used in the trash. + +If the server can't do About then rclone about will return an error. + Alias @@ -4718,11 +5020,44 @@ the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size. -Amazon S3 +Amazon S3 Storage Providers + +The S3 backend can be used with a number of different providers: + +- AWS S3 +- Ceph +- DigitalOcean Spaces +- Dreamhost +- IBM COS S3 +- Minio +- Wasabi Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir. 
+Once you have made a remote (see the provider specific section above) +you can use it like this: + +See all buckets + + rclone lsd remote: + +Make a new bucket + + rclone mkdir remote:bucket + +List the contents of a bucket + + rclone ls remote:bucket + +Sync /home/local/directory to the remote bucket, deleting any excess +files in the bucket. + + rclone sync /home/local/directory remote:bucket + + +AWS S3 + Here is an example of making an s3 configuration. First run rclone config @@ -4741,7 +5076,7 @@ This will guide you through an interactive setup process. \ "alias" 2 / Amazon Drive \ "amazon cloud drive" - 3 / Amazon S3 (also Dreamhost, Ceph, Minio) + 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio) \ "s3" 4 / Backblaze B2 \ "b2" @@ -4749,6 +5084,25 @@ This will guide you through an interactive setup process. 23 / http Connection \ "http" Storage> s3 + Choose your S3 provider. + Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + 2 / Ceph Object Storage + \ "Ceph" + 3 / Digital Ocean Spaces + \ "DigitalOcean" + 4 / Dreamhost DreamObjects + \ "Dreamhost" + 5 / IBM COS S3 + \ "IBMCOS" + 6 / Minio Object Storage + \ "Minio" + 7 / Wasabi Object Storage + \ "Wasabi" + 8 / Any other S3 compatible provider + \ "Other" + provider> 1 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step @@ -4760,7 +5114,7 @@ This will guide you through an interactive setup process. access_key_id> XXX AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> YYY - Region to connect to. Leave blank if you are using an S3 clone and you don't have a region. + Region to connect to. Choose a number from below, or type in your own value / The default endpoint - a good choice if you are unsure. 1 | US Region, Northern Virginia or Pacific Northwest. @@ -4805,13 +5159,9 @@ This will guide you through an interactive setup process. / South America (Sao Paulo) Region 14 | Needs location constraint sa-east-1. \ "sa-east-1" - / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH. - 15 | Set this and make sure you set the endpoint. - \ "other-v2-signature" region> 1 Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. - Specify if using an S3 clone such as Ceph. endpoint> Location constraint - must be set to match the Region. Used when creating buckets only. Choose a number from below, or type in your own value @@ -4880,10 +5230,14 @@ This will guide you through an interactive setup process. \ "REDUCED_REDUNDANCY" 4 / Standard Infrequent Access storage class \ "STANDARD_IA" + 5 / One Zone Infrequent Access storage class + \ "ONEZONE_IA" storage_class> 1 Remote config -------------------- [remote] + type = s3 + provider = AWS env_auth = false access_key_id = XXX secret_access_key = YYY @@ -4897,26 +5251,7 @@ This will guide you through an interactive setup process. y) Yes this is OK e) Edit this remote d) Delete this remote - y/e/d> y - -This remote is called remote and can now be used like this - -See all buckets - - rclone lsd remote: - -Make a new bucket - - rclone mkdir remote:bucket - -List the contents of a bucket - - rclone ls remote:bucket - -Sync /home/local/directory to the remote bucket, deleting any excess -files in the bucket. 
- - rclone sync /home/local/directory remote:bucket + y/e/d> --fast-list @@ -4924,6 +5259,20 @@ This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. +--update and --use-server-modtime + +As noted below, the modified time is stored on metadata on the object. +It is used by default for all operations that require checking the time +a file was last updated. It allows rclone to treat the remote more like +a true filesystem, but it is inefficient because it requires an extra +API call to retrieve the metadata. + +For many operations, the time the object was last uploaded to the remote +is sufficient to determine if it is "dirty". By using --update along +with --use-server-modtime, you can avoid the extra API call and simply +upload files whose local modtime is newer than the time it was last +uploaded. + Modified time The modified time is stored as metadata on the object as @@ -4944,21 +5293,29 @@ will get an error, incorrect region, the bucket is not in 'XXX' region. Authentication -There are two ways to supply rclone with a set of AWS credentials. In -order of precedence: +There are a number of ways to supply rclone with a set of AWS +credentials, with and without using the environment. -- Directly in the rclone configuration file (as configured by - rclone config) -- set access_key_id and secret_access_key. session_token can be - optionally set when using AWS STS. -- Runtime configuration: -- set env_auth to true in the config file -- Exporting the following environment variables before running rclone +The different authentication methods are tried in this order: + +- Directly in the rclone configuration file (env_auth = false in the + config file): +- access_key_id and secret_access_key are required. +- session_token can be optionally set when using AWS STS. +- Runtime configuration (env_auth = true in the config file): +- Export the following environment variables before running rclone: - Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY - Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY - - Session Token: AWS_SESSION_TOKEN -- Running rclone in an ECS task with an IAM role -- Running rclone on an EC2 instance with an IAM role + - Session Token: AWS_SESSION_TOKEN (optional) +- Or, use a named profile: + - Profile files are standard files used by AWS CLI tools + - By default it will use the profile in your home directory (eg + ~/.aws/credentials on unix based systems) file and the "default" + profile, to change set these environment variables: + - AWS_SHARED_CREDENTIALS_FILE to control which file. + - AWS_PROFILE to control which profile to use. +- Or, run rclone in an ECS task with an IAM role (AWS only). +- Or, run rclone on an EC2 instance with an IAM role (AWS only). If none of these option actually end up providing rclone with AWS credentials then S3 interaction will be non-authenticated (see below). @@ -5046,45 +5403,38 @@ Available options include: - STANDARD - default storage class - STANDARD_IA - for less frequently accessed data (e.g backups) +- ONEZONE_IA - for storing data in only one Availability Zone - REDUCED_REDUNDANCY (only for noncritical, reproducible data, has lower redundancy) +--s3-chunk-size=SIZE + +Any files larger than this will be uploaded in chunks of this size. The +default is 5MB. The minimum is 5MB. + +Note that 2 chunks of this size are buffered in memory per transfer. 
+ +If you are transferring large files over high speed links and you have +enough memory, then increasing this will speed up the transfers. + Anonymous access to public buckets If you want to use rclone to access a public bucket, configure with a -blank access_key_id and secret_access_key. Eg +blank access_key_id and secret_access_key. Your config should end up +looking like this: - No remotes found - make a new one - n) New remote - q) Quit config - n/q> n - name> anons3 - What type of source is it? - Choose a number from below - 1) amazon cloud drive - 2) b2 - 3) drive - 4) dropbox - 5) google cloud storage - 6) swift - 7) hubic - 8) local - 9) onedrive - 10) s3 - 11) yandex - type> 10 - Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. - Choose a number from below, or type in your own value - * Enter AWS credentials in the next step - 1) false - * Get AWS credentials from the environment (env vars or IAM) - 2) true - env_auth> 1 - AWS Access Key ID - leave blank for anonymous access or runtime credentials. - access_key_id> - AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. - secret_access_key> - ... + [anons3] + type = s3 + provider = AWS + env_auth = false + access_key_id = + secret_access_key = + region = us-east-1 + endpoint = + location_constraint = + acl = private + server_side_encryption = + storage_class = Then use it as normal with the name of the public bucket, eg @@ -5104,15 +5454,16 @@ config: [ceph] type = s3 + provider = Ceph env_auth = false access_key_id = XXX secret_access_key = YYY - region = + region = endpoint = https://ceph.endpoint.example.com - location_constraint = - acl = - server_side_encryption = - storage_class = + location_constraint = + acl = + server_side_encryption = + storage_class = Note also that Ceph sometimes puts / in the passwords it gives users. If you read the secret access key using the command line tools you will get @@ -5146,6 +5497,8 @@ blank and set the endpoint. You should end up with something like this in your config: [dreamobjects] + type = s3 + provider = DreamHost env_auth = false access_key_id = your_access_key secret_access_key = your_secret_key @@ -5178,25 +5531,26 @@ rclone config, each prompt should be answered as shown below: env_auth> 1 access_key_id> YOUR_ACCESS_KEY secret_access_key> YOUR_SECRET_KEY - region> + region> endpoint> nyc3.digitaloceanspaces.com - location_constraint> - acl> - storage_class> + location_constraint> + acl> + storage_class> The resulting configuration file should look like: [spaces] type = s3 + provider = DigitalOcean env_auth = false access_key_id = YOUR_ACCESS_KEY secret_access_key = YOUR_SECRET_KEY - region = + region = endpoint = nyc3.digitaloceanspaces.com - location_constraint = - acl = - server_side_encryption = - storage_class = + location_constraint = + acl = + server_side_encryption = + storage_class = Once configured, you can create a new Space and begin copying files. For example: @@ -5211,7 +5565,7 @@ dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). 
For more information visit: -(https://www.ibm.com/cloud/object-storage) +(http://www.ibm.com/cloud/object-storage) To configure access to IBM COS S3, follow the steps below: @@ -5226,28 +5580,39 @@ To configure access to IBM COS S3, follow the steps below: 2. Enter the name for the configuration - name> IBM-COS-XREGION + name> 3. Select "s3" storage. - Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive + 1 / Alias for a existing remote + \ "alias" + 2 / Amazon Drive \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio, IBM COS(S3)) + 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS) \ "s3" - 3 / Backblaze B2 - Storage> 2 + 4 / Backblaze B2 + \ "b2" + [snip] + 23 / http Connection + \ "http" + Storage> 3 -4. Select "Enter AWS credentials…" +4. Select IBM COS as the S3 Storage Provider. - Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. + Choose the S3 provider. Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" - env_auth> 1 + 1 / Choose this option to configure Storage to AWS S3 + \ "AWS" + 2 / Choose this option to configure Storage to Ceph Systems + \ "Ceph" + 3 / Choose this option to configure Storage to Dreamhost + \ "Dreamhost" + 4 / Choose this option to the configure Storage to IBM COS S3 + \ "IBMCOS" + 5 / Choose this option to the configure Storage to Minio + \ "Minio" + Provider>4 5. Enter the Access Key and Secret. @@ -5256,132 +5621,97 @@ To configure access to IBM COS S3, follow the steps below: AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> <> -6. Select "other-v4-signature" region. +6. Specify the endpoint for IBM COS. For Public IBM COS, choose from + the option below. For On Premise IBM COS, enter an enpoint address. - Region to connect to. + Endpoint for IBM COS S3 API. + Specify if using an IBM COS On Premise. Choose a number from below, or type in your own value - / The default endpoint - a good choice if you are unsure. - 1 | US Region, Northern Virginia or Pacific Northwest. - | Leave location constraint empty. - \ "us-east-1" - / US East (Ohio) Region - 2 | Needs location constraint us-east-2. - \ "us-east-2" - / US West (Oregon) Region - …… - 15 | eg Ceph/Dreamhost - | set this and make sure you set the endpoint. - \ "other-v2-signature" - / If using an S3 clone that understands v4 signatures set this - 16 | and make sure you set the endpoint. 
- \ "other-v4-signature - region> 16 + 1 / US Cross Region Endpoint + \ "s3-api.us-geo.objectstorage.softlayer.net" + 2 / US Cross Region Dallas Endpoint + \ "s3-api.dal.us-geo.objectstorage.softlayer.net" + 3 / US Cross Region Washington DC Endpoint + \ "s3-api.wdc-us-geo.objectstorage.softlayer.net" + 4 / US Cross Region San Jose Endpoint + \ "s3-api.sjc-us-geo.objectstorage.softlayer.net" + 5 / US Cross Region Private Endpoint + \ "s3-api.us-geo.objectstorage.service.networklayer.com" + 6 / US Cross Region Dallas Private Endpoint + \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com" + 7 / US Cross Region Washington DC Private Endpoint + \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com" + 8 / US Cross Region San Jose Private Endpoint + \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com" + 9 / US Region East Endpoint + \ "s3.us-east.objectstorage.softlayer.net" + 10 / US Region East Private Endpoint + \ "s3.us-east.objectstorage.service.networklayer.com" + 11 / US Region South Endpoint + [snip] + 34 / Toronto Single Site Private Endpoint + \ "s3.tor01.objectstorage.service.networklayer.com" + endpoint>1 -7. Enter the endpoint FQDN. +7. Specify a IBM COS Location Constraint. The location constraint must + match endpoint when using IBM Cloud Public. For on-prem COS, do not + make a selection from this list, hit enter - Leave blank if using AWS to use the default endpoint for the region. - Specify if using an S3 clone such as Ceph. - endpoint> s3-api.us-geo.objectstorage.softlayer.net + 1 / US Cross Region Standard + \ "us-standard" + 2 / US Cross Region Vault + \ "us-vault" + 3 / US Cross Region Cold + \ "us-cold" + 4 / US Cross Region Flex + \ "us-flex" + 5 / US East Region Standard + \ "us-east-standard" + 6 / US East Region Vault + \ "us-east-vault" + 7 / US East Region Cold + \ "us-east-cold" + 8 / US East Region Flex + \ "us-east-flex" + 9 / US South Region Standard + \ "us-south-standard" + 10 / US South Region Vault + \ "us-south-vault" + [snip] + 32 / Toronto Flex + \ "tor01-flex" + location_constraint>1 -8. Specify a IBM COS Location Constraint. - a. Currently, the only IBM COS values for LocationConstraint are: - us-standard / us-vault / us-cold / us-flex us-east-standard / - us-east-vault / us-east-cold / us-east-flex us-south-standard / - us-south-vault / us-south-cold / us-south-flex eu-standard / - eu-vault / eu-cold / eu-flex - - Location constraint - must be set to match the Region. Used when creating buckets only. - Choose a number from below, or type in your own value - 1 / Empty for US Region, Northern Virginia or Pacific Northwest. - \ "" - 2 / US East (Ohio) Region. - \ "us-east-2" - …… - location_constraint> us-standard - -9. Specify a canned ACL. +8. Specify a canned ACL. IBM Cloud (Strorage) supports "public-read" + and "private". IBM Cloud(Infra) supports all the canned ACLs. + On-Premise COS supports all the canned ACLs. Canned ACL used when creating buckets and/or storing objects in S3. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value - 1 / Owner gets FULL_CONTROL. No one else has access rights (default). - \ "private" - 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. - \ "public-read" - / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. - 3 | Granting this on a bucket is generally not recommended. - \ "public-read-write" - 4 / Owner gets FULL_CONTROL. 
The AuthenticatedUsers group gets READ access. - \ "authenticated-read" - / Object owner gets FULL_CONTROL. Bucket owner gets READ access. - 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - \ "bucket-owner-read" - / Both the object owner and the bucket owner get FULL_CONTROL over the object. - 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - \ "bucket-owner-full-control" + 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS + \ "private" + 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS + \ "public-read" + 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS + \ "public-read-write" + 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS + \ "authenticated-read" acl> 1 -10. Set the SSE option to "None". +9. Review the displayed configuration and accept to save the "remote" + then quit. The config file should look like this - Choose a number from below, or type in your own value - 1 / None - \ "" - 2 / AES256 - \ "AES256" - server_side_encryption> 1 - -11. Set the storage class to "None" (IBM COS uses the LocationConstraint - at the bucket level). - - The storage class to use when storing objects in S3. - Choose a number from below, or type in your own value - 1 / Default - \ "" - 2 / Standard storage class - \ "STANDARD" - 3 / Reduced redundancy storage class - \ "REDUCED_REDUNDANCY" - 4 / Standard Infrequent Access storage class - \ "STANDARD_IA" - storage_class> - -12. Review the displayed configuration and accept to save the "remote" - then quit. - - Remote config - -------------------- - [IBM-COS-XREGION] - env_auth = false - access_key_id = <> - secret_access_key = <> - region = other-v4-signature + [xxx] + type = s3 + Provider = IBMCOS + access_key_id = xxx + secret_access_key = yyy endpoint = s3-api.us-geo.objectstorage.softlayer.net location_constraint = us-standard acl = private - server_side_encryption = - storage_class = - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - Remote config - Current remotes: - Name Type - ==== ==== - IBM-COS-XREGION s3 - - e) Edit existing remote - n) New remote - d) Delete remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - e/n/d/r/c/s/q> q - -13. Execute rclone commands +10. Execute rclone commands 1) Create a bucket. rclone mkdir IBM-COS-XREGION:newbucket @@ -5446,6 +5776,8 @@ important to put the region in as stated above. Which makes the config file look like this [minio] + type = s3 + provider = Minio env_auth = false access_key_id = USWUXHGYZQYFYFFIT3RE secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 @@ -5509,21 +5841,21 @@ rclone like this. 1 / Empty for US Region, Northern Virginia or Pacific Northwest. \ "" [snip] - location_constraint> + location_constraint> Canned ACL used when creating buckets and/or storing objects in S3. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). 
\ "private" [snip] - acl> + acl> The server-side encryption algorithm used when storing this object in S3. Choose a number from below, or type in your own value 1 / None \ "" 2 / AES256 \ "AES256" - server_side_encryption> + server_side_encryption> The storage class to use when storing objects in S3. Choose a number from below, or type in your own value 1 / Default @@ -5534,7 +5866,7 @@ rclone like this. \ "REDUCED_REDUNDANCY" 4 / Standard Infrequent Access storage class \ "STANDARD_IA" - storage_class> + storage_class> Remote config -------------------- [wasabi] @@ -5543,10 +5875,10 @@ rclone like this. secret_access_key = YOURSECRETACCESSKEY region = us-east-1 endpoint = s3.wasabisys.com - location_constraint = - acl = - server_side_encryption = - storage_class = + location_constraint = + acl = + server_side_encryption = + storage_class = -------------------- y) Yes this is OK e) Edit this remote @@ -5556,15 +5888,17 @@ rclone like this. This will leave the config file looking like this. [wasabi] + type = s3 + provider = Wasabi env_auth = false access_key_id = YOURACCESSKEY secret_access_key = YOURSECRETACCESSKEY - region = us-east-1 + region = endpoint = s3.wasabisys.com - location_constraint = - acl = - server_side_encryption = - storage_class = + location_constraint = + acl = + server_side_encryption = + storage_class = Backblaze B2 @@ -6370,7 +6704,9 @@ DEFAULT: not set --cache-chunk-size=SIZE The size of a chunk (partial file data). Use lower numbers for slower -connections. +connections. If the chunk size is changed, any downloaded chunks will be +invalid and cache-chunk-path will need to be cleared or unexpected EOF +errors will occur. DEFAULT: 5M @@ -7041,7 +7377,7 @@ Note that Dropbox is case insensitive so you can't have a file called There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message -File name disallowed - not uploading if it attempt to upload one of +File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail. If you have more than 10,000 files in a directory then @@ -7374,7 +7710,10 @@ These credentials are what rclone will use for authentication. To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and -rclone won't use the browser based authentication flow. +rclone won't use the browser based authentication flow. If you'd rather +stuff the contents of the credentials file into the rclone config file, +you can set service_account_credentials with the actual contents of the +file instead, or set the equivalent environment variable. --fast-list @@ -7582,7 +7921,10 @@ users, for example build machines. To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt during rclone config and rclone won't use the browser based -authentication flow. +authentication flow. If you'd rather stuff the contents of the +credentials file into the rclone config file, you can set +service_account_credentials with the actual contents of the file +instead, or set the equivalent environment variable. Use case - Google Apps/G-suite account and individual Drive @@ -7713,6 +8055,13 @@ If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. 
This command does not take any path arguments. +Quota information + +To view your current quota you can use the rclone about remote: command +which will display your usage limit (quota), the usage in Google Drive, +the size of all files in the Trash and the space used by other Google +services such as Gmail. This command does not take any path arguments. + Specific options Here are the command line options specific to this cloud storage system. @@ -8178,6 +8527,104 @@ The Swift API doesn't return a correct MD5SUM for segmented files MD5SUM for these. +Mega + +Mega is a cloud storage and file hosting service known for its security +feature where all files are encrypted locally before they are uploaded. +This prevents anyone (including employees of Mega) from accessing the +files without knowledge of the key used for encryption. + +This is an rclone backend for Mega which supports the file transfer +features of Mega using the same client side encryption. + +Paths are specified as remote:path + +Paths may be as deep as required, eg remote:directory/subdirectory. + +Here is an example of how to make a remote called remote. First run: + + rclone config + +This will guide you through an interactive setup process: + + No remotes found - make a new one + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> remote + Type of storage to configure. + Choose a number from below, or type in your own value + 1 / Alias for a existing remote + \ "alias" + [snip] + 14 / Mega + \ "mega" + [snip] + 23 / http Connection + \ "http" + Storage> mega + User name + user> you@example.com + Password. + y) Yes type in my own password + g) Generate random password + n) No leave this optional password blank + y/g/n> y + Enter the password: + password: + Confirm the password: + password: + Remote config + -------------------- + [remote] + type = mega + user = you@example.com + pass = *** ENCRYPTED *** + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y + +Once configured you can then use rclone like this, + +List directories in top level of your Mega + + rclone lsd remote: + +List all the files in your Mega + + rclone ls remote: + +To copy a local directory to an Mega directory called backup + + rclone copy /home/source remote:backup + +Modified time and hashes + +Mega does not support modification times or hashes yet. + +Duplicated files + +Mega can have two files with exactly the same name and path (unlike a +normal file system). + +Duplicated files cause problems with the syncing and you will see +messages in the log about duplicates. + +Use rclone dedupe to fix duplicated files. + +Limitations + +This backend uses the go-mega go library which is an opensource go +library implementing the Mega API. There doesn't appear to be any +documentation for the mega protocol beyond the mega C++ SDK source code +so there are likely quite a few errors still remaining in this library. + +Mega allows duplicate files which may confuse rclone. + + Microsoft Azure Blob Storage Paths are specified as remote:container (or remote: for the lsd @@ -8457,7 +8904,10 @@ OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. -One drive supports SHA1 type hashes, so you can use --checksum flag. +OneDrive personal supports SHA1 type hashes. OneDrive for business and +Sharepoint Server support QuickXorHash. + +For all types of OneDrive you can use the --checksum flag. 
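+
+For example (a sketch - the remote name and paths are placeholders):
+
+    rclone sync --checksum /home/source remote:backup
+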
Deleting files

@@ -8897,6 +9347,20 @@
This remote supports --fast-list which allows you to use fewer
transactions in exchange for more memory. See the rclone docs for more
details.

+--update and --use-server-modtime
+
+As noted below, the modified time is stored on metadata on the object.
+It is used by default for all operations that require checking the time
+a file was last updated. It allows rclone to treat the remote more like
+a true filesystem, but it is inefficient because it requires an extra
+API call to retrieve the metadata.
+
+For many operations, the time the object was last uploaded to the remote
+is sufficient to determine if it is "dirty". By using --update along
+with --use-server-modtime, you can avoid the extra API call and simply
+upload files whose local modtime is newer than the time it was last
+uploaded.
+
Specific options

Here are the command line options specific to this cloud storage
system.
@@ -9072,12 +9536,16 @@
SFTP

SFTP is the Secure (or SSH) File Transfer Protocol.

-It runs over SSH v2 and is standard with most modern SSH installations.
+SFTP runs over SSH v2 and is installed as standard with most modern SSH
+installations.

Paths are specified as remote:path. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user's home directory.

+Note that some SFTP servers will need the leading / - Synology is a good
+example of this.
+
Here is an example of making an SFTP configuration. First run

    rclone config
@@ -9227,11 +9695,16 @@
behaviour.

Limitations

SFTP supports checksums if the same login has shell access and md5sum or
-sha1sum as well as echo are in the remote's PATH. This remote check can
-be disabled by setting the configuration option disable_hashcheck. This
-may be required if you're connecting to SFTP servers which are not under
-your control, and to which the execution of remote commands is
-prohibited.
+sha1sum as well as echo are in the remote's PATH. This remote
+checksumming (file hashing) is recommended and enabled by default.
+Disabling the checksumming may be required if you are connecting to SFTP
+servers which are not under your control, and to which the execution of
+remote commands is prohibited. Set the configuration option
+disable_hashcheck to true to disable checksumming.
+
+Note that on some SFTP servers (eg Synology) the paths are different
+for SSH and SFTP, so the hashes can't be calculated properly. For these
+servers, using disable_hashcheck is a good idea.

The only ssh agent supported under Windows is Putty's pageant.
@@ -9325,7 +9798,9 @@
This will guide you through an interactive setup process:

 \ "nextcloud"
 2 / Owncloud
 \ "owncloud"
- 3 / Other site/service or software
+ 3 / Sharepoint
+ \ "sharepoint"
+ 4 / Other site/service or software
 \ "other"
vendor> 1
User name
@@ -9410,6 +9885,44 @@
to signal to the OS that it can't write to the mount. For more help see
the put.io webdav docs.

+Sharepoint
+
+This remote can be used with Sharepoint provided by OneDrive for
+Business or Office365 Education accounts. This feature is only needed
+for a few of these accounts, mostly Office365 Education ones. These
+accounts are sometimes not verified by the domain owner (see
+github#1975).
+
+This means that these accounts can't be added using the official API
+(other accounts should work with the "onedrive" option). However, it is
+possible to access them using webdav.
+
+To use a sharepoint remote with rclone, you first need to get your
+remote's URL:
+
+- Go here to open your OneDrive or to sign in
+- Now take a look at your address bar, the URL should look like this:
+  https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_layouts/15/onedrive.aspx
+
+You'll only need this URL up to the email address. After that, you'll
+most likely want to add "/Documents". That subdirectory contains the
+actual data stored on your OneDrive.
+
+Add the remote to rclone like this: Configure the url as
+https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
+and use your normal account email and password for user and pass. If you
+have 2FA enabled, you have to generate an app password. Set the vendor
+to sharepoint.
+
+Your config file should look like this:
+
+    [sharepoint]
+    type = webdav
+    url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
+    vendor = sharepoint
+    user = YourEmailAddress
+    pass = encryptedpassword


Yandex Disk

Yandex Disk is a cloud storage solution created by Yandex.
@@ -9638,6 +10151,18 @@
and

    6 b/two
    6 b/one

+--local-no-check-updated
+
+Don't check to see if the files change during upload.
+
+Normally rclone checks the size and modification time of files as they
+are being uploaded and aborts with a message which starts
+can't copy - source file is being updated if the file changes during
+upload.
+
+However, on some file systems this modification time check may fail (eg
+Glusterfs #2206), so this check can be disabled with this flag.
+
--local-no-unicode-normalization

This flag is deprecated now. Rclone no longer normalizes unicode file
@@ -9686,6 +10211,105 @@
points, as you explicitly acknowledge that they should be skipped.

Changelog

+- v1.41 - 2018-04-28
+ - New backends
+ - Mega support added
+ - Webdav now supports SharePoint cookie authentication (hensur)
+ - New commands
+ - link: create public link to files and folders (Stefan Breunig)
+ - about: gets quota info from a remote (a-roussos, ncw)
+ - hashsum: a generic tool for any hash to produce md5sum like
+ output
+ - New Features
+ - lsd: Add -R flag and fix and update docs for all ls commands
+ - ncdu: added a "refresh" key - CTRL-L (Keith Goldfarb)
+ - serve restic: Add append-only mode (Steve Kriss)
+ - serve restic: Disallow overwriting files in append-only mode
+ (Alexander Neumann)
+ - serve restic: Print actual listener address (Matt Holt)
+ - size: Add --json flag (Matthew Holt)
+ - sync: implement --ignore-errors (Mateusz Pabian)
+ - dedupe: Add dedupe largest functionality (Richard Yang)
+ - fs: Extend SizeSuffix to include TB and PB for rclone about
+ - fs: add --dump goroutines and --dump openfiles for debugging
+ - rc: implement core/memstats to print internal memory usage info
+ - rc: new call rc/pid (Michael P. Dubner)
+ - Compile
+ - Drop support for go1.6
+ - Release
+ - Fix make tarball (Chih-Hsuan Yen)
+ - Bug Fixes
+ - filter: fix --min-age and --max-age together check
+ - fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport
+ - lsd,lsf: make sure all times we output are in local time
+ - rc: fix setting bwlimit to unlimited
+ - rc: take note of the --rc-addr flag too as per the docs
+ - Mount
+ - Use About to return the correct disk total/used/free (eg in df)
+ - Set --attr-timeout default to 1s - fixes:
+ - rclone using too much memory
+ - rclone not serving files to samba
+ - excessive time listing directories
+ - Fix df -i (upstream fix)
+ - VFS
+ - Filter files . and ..
from directory listing + - Only make the VFS cache if --vfs-cache-mode > Off + - Local + - Add --local-no-check-updated to disable updated file checks + - Retry remove on Windows sharing violation error + - Cache + - Flush the memory cache after close + - Purge file data on notification + - Always forget parent dir for notifications + - Integrate with Plex websocket + - Add rc cache/stats (seuffert) + - Add info log on notification + - Box + - Fix failure reading large directories - parse file/directory + size as float + - Dropbox + - Fix crypt+obfuscate on dropbox + - Fix repeatedly uploading the same files + - FTP + - Work around strange response from box FTP server + - More workarounds for FTP servers to fix mkParentDir error + - Fix no error on listing non-existent directory + - Google Cloud Storage + - Add service_account_credentials (Matt Holt) + - Detect bucket presence by listing it - minimises permissions + needed + - Ignore zero length directory markers + - Google Drive + - Add service_account_credentials (Matt Holt) + - Fix directory move leaving a hardlinked directory behind + - Return proper google errors when Opening files + - When initialized with a filepath, optional features used + incorrect root path (Stefan Breunig) + - HTTP + - Fix sync for servers which don't return Content-Length in HEAD + - Onedrive + - Add QuickXorHash support for OneDrive for business + - Fix socket leak in multipart session upload + - S3 + - Look in S3 named profile files for credentials + - Add --s3-disable-checksum to disable checksum uploading (Chris + Redekop) + - Hierarchical configuration support (Giri Badanahatti) + - Add in config for all the supported S3 providers + - Add One Zone Infrequent Access storage class (Craig Rachel) + - Add --use-server-modtime support (Peter Baumgartner) + - Add --s3-chunk-size option to control multipart uploads + - Ignore zero length directory markers + - SFTP + - Update docs to match code, fix typos and clarify + disable_hashcheck prompt (Michael G. Noll) + - Update docs with Synology quirks + - Fail soft with a debug on hash failure + - Swift + - Add --use-server-modtime support (Peter Baumgartner) + - Webdav + - Support SharePoint cookie authentication (hensur) + - Strip leading and trailing / off root - v1.40 - 2018-03-19 - New backends - Alias backend to create aliases for existing remote names @@ -11143,6 +11767,7 @@ Contributors - Zhiming Wang zmwangx@gmail.com - Andy Pilate cubox@cubox.me - Oliver Heyme olihey@googlemail.com olihey@users.noreply.github.com + de8olihe@lego.com - wuyu wuyu@yunify.com - Andrei Dragomir adragomi@adobe.com - Christian Brüggemann mail@cbruegg.com @@ -11177,6 +11802,7 @@ Contributors - lewapm 32110057+lewapm@users.noreply.github.com - Yassine Imounachen yassine256@gmail.com - Chris Redekop chris-redekop@users.noreply.github.com + chris.redekop@gmail.com - Jon Fautley jon@adenoid.appstal.co.uk - Will Gunn WillGunn@users.noreply.github.com - Lucas Bremgartner lucas@bremis.ch @@ -11194,6 +11820,24 @@ Contributors - wolfv wolfv6@users.noreply.github.com - Dave Pedu dave@davepedu.com - Stefan Lindblom lindblom@spotify.com +- seuffert oliver@seuffert.biz +- gbadanahatti 37121690+gbadanahatti@users.noreply.github.com +- Keith Goldfarb barkofdelight@gmail.com +- Steve Kriss steve@heptio.com +- Chih-Hsuan Yen yan12125@gmail.com +- Alexander Neumann fd0@users.noreply.github.com +- Matt Holt mholt@users.noreply.github.com +- Eri Bastos bastos.eri@gmail.com +- Michael P. 
Dubner pywebmail@list.ru +- Antoine GIRARD sapk@users.noreply.github.com +- Mateusz Piotrowski mpp302@gmail.com +- Animosity022 animosity22@users.noreply.github.com +- Peter Baumgartner pete@lincolnloop.com +- Craig Rachel craig@craigrachel.com +- Michael G. Noll miguno@users.noreply.github.com +- hensur me@hensur.de +- Oliver Heyme de8olihe@lego.com +- Richard Yang richard@yenforyang.com diff --git a/docs/content/changelog.md b/docs/content/changelog.md index ca7533640..50fcf1347 100644 --- a/docs/content/changelog.md +++ b/docs/content/changelog.md @@ -1,12 +1,104 @@ --- title: "Documentation" description: "Rclone Changelog" -date: "2018-03-19" +date: "2018-04-28" --- Changelog --------- + * v1.41 - 2018-04-28 + * New backends + * Mega support added + * Webdav now supports SharePoint cookie authentication (hensur) + * New commands + * link: create public link to files and folders (Stefan Breunig) + * about: gets quota info from a remote (a-roussos, ncw) + * hashsum: a generic tool for any hash to produce md5sum like output + * New Features + * lsd: Add -R flag and fix and update docs for all ls commands + * ncdu: added a "refresh" key - CTRL-L (Keith Goldfarb) + * serve restic: Add append-only mode (Steve Kriss) + * serve restic: Disallow overwriting files in append-only mode (Alexander Neumann) + * serve restic: Print actual listener address (Matt Holt) + * size: Add --json flag (Matthew Holt) + * sync: implement --ignore-errors (Mateusz Pabian) + * dedupe: Add dedupe largest functionality (Richard Yang) + * fs: Extend SizeSuffix to include TB and PB for rclone about + * fs: add --dump goroutines and --dump openfiles for debugging + * rc: implement core/memstats to print internal memory usage info + * rc: new call rc/pid (Michael P. Dubner) + * Compile + * Drop support for go1.6 + * Release + * Fix `make tarball` (Chih-Hsuan Yen) + * Bug Fixes + * filter: fix --min-age and --max-age together check + * fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport + * lsd,lsf: make sure all times we output are in local time + * rc: fix setting bwlimit to unlimited + * rc: take note of the --rc-addr flag too as per the docs + * Mount + * Use About to return the correct disk total/used/free (eg in `df`) + * Set `--attr-timeout default` to `1s` - fixes: + * rclone using too much memory + * rclone not serving files to samba + * excessive time listing directories + * Fix `df -i` (upstream fix) + * VFS + * Filter files `.` and `..` from directory listing + * Only make the VFS cache if --vfs-cache-mode > Off + * Local + * Add --local-no-check-updated to disable updated file checks + * Retry remove on Windows sharing violation error + * Cache + * Flush the memory cache after close + * Purge file data on notification + * Always forget parent dir for notifications + * Integrate with Plex websocket + * Add rc cache/stats (seuffert) + * Add info log on notification + * Box + * Fix failure reading large directories - parse file/directory size as float + * Dropbox + * Fix crypt+obfuscate on dropbox + * Fix repeatedly uploading the same files + * FTP + * Work around strange response from box FTP server + * More workarounds for FTP servers to fix mkParentDir error + * Fix no error on listing non-existent directory + * Google Cloud Storage + * Add service_account_credentials (Matt Holt) + * Detect bucket presence by listing it - minimises permissions needed + * Ignore zero length directory markers + * Google Drive + * Add service_account_credentials (Matt Holt) + * Fix directory move leaving a hardlinked 
directory behind + * Return proper google errors when Opening files + * When initialized with a filepath, optional features used incorrect root path (Stefan Breunig) + * HTTP + * Fix sync for servers which don't return Content-Length in HEAD + * Onedrive + * Add QuickXorHash support for OneDrive for business + * Fix socket leak in multipart session upload + * S3 + * Look in S3 named profile files for credentials + * Add `--s3-disable-checksum` to disable checksum uploading (Chris Redekop) + * Hierarchical configuration support (Giri Badanahatti) + * Add in config for all the supported S3 providers + * Add One Zone Infrequent Access storage class (Craig Rachel) + * Add --use-server-modtime support (Peter Baumgartner) + * Add --s3-chunk-size option to control multipart uploads + * Ignore zero length directory markers + * SFTP + * Update docs to match code, fix typos and clarify disable_hashcheck prompt (Michael G. Noll) + * Update docs with Synology quirks + * Fail soft with a debug on hash failure + * Swift + * Add --use-server-modtime support (Peter Baumgartner) + * Webdav + * Support SharePoint cookie authentication (hensur) + * Strip leading and trailing / off root * v1.40 - 2018-03-19 * New backends * Alias backend to create aliases for existing remote names (Fabian Möller) diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md index 36d30a518..b4ff61600 100644 --- a/docs/content/commands/rclone.md +++ b/docs/content/commands/rclone.md @@ -1,12 +1,12 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone" slug: rclone url: /commands/rclone/ --- ## rclone -Sync files and directories to and from local and remote object stores - v1.40 +Sync files and directories to and from local and remote object stores - v1.41 ### Synopsis @@ -24,6 +24,7 @@ from various cloud storage systems and using file transfer services, such as: * Google Drive * HTTP * Hubic + * Mega * Microsoft Azure Blob Storage * Microsoft OneDrive * Openstack Swift / Rackspace cloud files / Memset Memstore @@ -114,7 +115,7 @@ rclone [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -128,23 +129,26 @@ rclone [flags] --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). -h, --help help for rclone --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. 
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -167,7 +171,9 @@ rclone [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -186,13 +192,15 @@ rclone [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) -V, --version Print the version number ``` ### SEE ALSO +* [rclone about](/commands/rclone_about/) - Get quota information from the remote. * [rclone authorize](/commands/rclone_authorize/) - Remote authorization. * [rclone cachestats](/commands/rclone_cachestats/) - Print cache stats for a remote * [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout. 
@@ -208,6 +216,8 @@ rclone [flags] * [rclone delete](/commands/rclone_delete/) - Remove the contents of path. * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. * [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied. +* [rclone hashsum](/commands/rclone_hashsum/) - Produces an hashsum file for all the objects in the path. +* [rclone link](/commands/rclone_link/) - Generate public link to file/folder. * [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file. * [rclone ls](/commands/rclone_ls/) - List the objects in the path with size and path. * [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path. @@ -234,4 +244,4 @@ rclone [flags] * [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion. * [rclone version](/commands/rclone_version/) - Show the version number. -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_about.md b/docs/content/commands/rclone_about.md new file mode 100644 index 000000000..d1f665b4e --- /dev/null +++ b/docs/content/commands/rclone_about.md @@ -0,0 +1,214 @@ +--- +date: 2018-04-28T11:44:58+01:00 +title: "rclone about" +slug: rclone_about +url: /commands/rclone_about/ +--- +## rclone about + +Get quota information from the remote. + +### Synopsis + + +Get quota information from the remote, like bytes used/free/quota and bytes +used in the trash. Not supported by all remotes. + +This will print to stdout something like this: + + Total: 17G + Used: 7.444G + Free: 1.315G + Trashed: 100.000M + Other: 8.241G + +Where the fields are: + + * Total: total size available. + * Used: total size used + * Free: total amount this user could upload. + * Trashed: total amount in the trash + * Other: total amount in other storage (eg Gmail, Google Photos) + * Objects: total number of objects in the storage + +Note that not all the backends provide all the fields - they will be +missing if they are not known for that backend. Where it is known +that the value is unlimited the value will also be omitted. + +Use the --full flag to see the numbers written out in full, eg + + Total: 18253611008 + Used: 7993453766 + Free: 1411001220 + Trashed: 104857602 + Other: 8849156022 + +Use the --json flag for a computer readable output, eg + + { + "total": 18253611008, + "used": 7993453766, + "trashed": 104857602, + "other": 8849156022, + "free": 1411001220 + } + + +``` +rclone about remote: [flags] +``` + +### Options + +``` + --full Full numbers instead of SI units + -h, --help help for about + --json Format output as JSON +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) + --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. 
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) + --buffer-size int Buffer size when copying files. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming + --cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") + --cache-chunk-size string The size of a chunk (default "5M") + --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") + --cache-db-purge Purge the cache DB before + --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") + --cache-info-age string How much time should object info be stored in cache (default "6h") + --cache-read-retries int How many times to retry a read from a cache storage (default 10) + --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage + --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") + --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") + --cache-workers int How many workers should run in parallel to download chunks (default 4) + --cache-writes Will cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transfering + --delete-before When synchronizing, delete files on destination before transfering + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-impersonate string Impersonate this user when using a service account. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me + --drive-skip-gdocs Skip google documents in all listings. 
+ --drive-trashed-only Only show files that are in the trash + --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) + -n, --dry-run Do a trial run with no permanent changes + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP bodies - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. + --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). + --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames + --log-file string Log everything to this file + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. + --memprofile string Write memory profile to file + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Obsolete - does nothing. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries. + --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) + --sftp-ask-password Allow asking for SFTP password when needed. + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.41") + -v, --verbose count Print lots more stuff (repeat for more) +``` + +### SEE ALSO + +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 + +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md index e0e822cd6..34891a91c 100644 --- a/docs/content/commands/rclone_authorize.md +++ b/docs/content/commands/rclone_authorize.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone authorize" slug: rclone_authorize url: /commands/rclone_authorize/ @@ -85,7 +85,7 @@ rclone authorize [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -98,23 +98,26 @@ rclone authorize [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. 
--memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -137,7 +140,9 @@ rclone authorize [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -156,12 +161,13 @@ rclone authorize [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_cachestats.md b/docs/content/commands/rclone_cachestats.md index 2960aacd5..d08e8b2ec 100644 --- a/docs/content/commands/rclone_cachestats.md +++ b/docs/content/commands/rclone_cachestats.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone cachestats" slug: rclone_cachestats url: /commands/rclone_cachestats/ @@ -84,7 +84,7 @@ rclone cachestats source: [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -97,23 +97,26 @@ rclone cachestats source: [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -136,7 +139,9 @@ rclone cachestats source: [flags] --rc-user string User name for authentication. 
--retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -155,12 +160,13 @@ rclone cachestats source: [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md index e72590000..070118cb1 100644 --- a/docs/content/commands/rclone_cat.md +++ b/docs/content/commands/rclone_cat.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone cat" slug: rclone_cat url: /commands/rclone_cat/ @@ -106,7 +106,7 @@ rclone cat remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -119,23 +119,26 @@ rclone cat remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. 
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -158,7 +161,9 @@ rclone cat remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -177,12 +182,13 @@ rclone cat remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md index f4bc2bc53..e1c24a94d 100644 --- a/docs/content/commands/rclone_check.md +++ b/docs/content/commands/rclone_check.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone check" slug: rclone_check url: /commands/rclone_check/ @@ -95,7 +95,7 @@ rclone check source:path dest:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -108,23 +108,26 @@ rclone check source:path dest:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. 
--memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -147,7 +150,9 @@ rclone check source:path dest:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -166,12 +171,13 @@ rclone check source:path dest:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md index 3653be1f3..2a56af280 100644 --- a/docs/content/commands/rclone_cleanup.md +++ b/docs/content/commands/rclone_cleanup.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone cleanup" slug: rclone_cleanup url: /commands/rclone_cleanup/ @@ -85,7 +85,7 @@ rclone cleanup remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -98,23 +98,26 @@ rclone cleanup remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -137,7 +140,9 @@ rclone cleanup remote:path [flags] --rc-user string User name for authentication. 
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -156,12 +161,13 @@ rclone cleanup remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41
-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md
index 1a5b9932d..69f601207 100644
--- a/docs/content/commands/rclone_config.md
+++ b/docs/content/commands/rclone_config.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -85,7 +85,7 @@ rclone config [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -98,23 +98,26 @@ rclone config [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -137,7 +140,9 @@ rclone config [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -156,13 +161,14 @@ rclone config [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 * [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options. 
* [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote <name>.
* [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON.
@@ -173,4 +179,4 @@ rclone config [flags]
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_config_create.md b/docs/content/commands/rclone_config_create.md
index e8ec0edac..ffc1b6e67 100644
--- a/docs/content/commands/rclone_config_create.md
+++ b/docs/content/commands/rclone_config_create.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/
@@ -90,7 +90,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -103,23 +103,26 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file
- --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
@@ -142,7 +145,9 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -161,7 +166,8 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -169,4 +175,4 @@ rclone config create <name> <type> [<key> <value>]* [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_config_delete.md b/docs/content/commands/rclone_config_delete.md
index 2d0719dd0..7ba2f5eb8 100644
--- a/docs/content/commands/rclone_config_delete.md
+++ b/docs/content/commands/rclone_config_delete.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/
@@ -82,7 +82,7 @@ rclone config delete <name> [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M.
(default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -95,23 +95,26 @@ rclone config delete <name> [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file
- --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
@@ -134,7 +137,9 @@ rclone config delete <name> [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -153,7 +158,8 @@ rclone config delete <name> [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -161,4 +167,4 @@ rclone config delete <name> [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_config_dump.md b/docs/content/commands/rclone_config_dump.md
index 160d006a6..4a78cf4ee 100644
--- a/docs/content/commands/rclone_config_dump.md
+++ b/docs/content/commands/rclone_config_dump.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone config dump"
slug: rclone_config_dump
url: /commands/rclone_config_dump/
@@ -82,7 +82,7 @@ rclone config dump [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -95,23 +95,26 @@ rclone config dump [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -134,7 +137,9 @@ rclone config dump [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -153,7 +158,8 @@ rclone config dump [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` @@ -161,4 +167,4 @@ rclone config dump [flags] * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. 
-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_config_edit.md b/docs/content/commands/rclone_config_edit.md
index 9c47362fa..b49b36d5c 100644
--- a/docs/content/commands/rclone_config_edit.md
+++ b/docs/content/commands/rclone_config_edit.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone config edit"
slug: rclone_config_edit
url: /commands/rclone_config_edit/
@@ -85,7 +85,7 @@ rclone config edit [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -98,23 +98,26 @@ rclone config edit [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -137,7 +140,9 @@ rclone config edit [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -156,7 +161,8 @@ rclone config edit [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` @@ -164,4 +170,4 @@ rclone config edit [flags] * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_config_file.md b/docs/content/commands/rclone_config_file.md index f56433151..c3f9b9b35 100644 --- a/docs/content/commands/rclone_config_file.md +++ b/docs/content/commands/rclone_config_file.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone config file" slug: rclone_config_file url: /commands/rclone_config_file/ @@ -82,7 +82,7 @@ rclone config file [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -95,23 +95,26 @@ rclone config file [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file
- --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
@@ -134,7 +137,9 @@ rclone config file [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -153,7 +158,8 @@ rclone config file [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -161,4 +167,4 @@ rclone config file [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_config_password.md b/docs/content/commands/rclone_config_password.md
index 3fb51f5df..c6084fe2b 100644
--- a/docs/content/commands/rclone_config_password.md
+++ b/docs/content/commands/rclone_config_password.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/
@@ -89,7 +89,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -102,23 +102,26 @@ rclone config password <name> [<key> <value>]+ [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file
- --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
@@ -141,7 +144,9 @@ rclone config password <name> [<key> <value>]+ [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -160,7 +165,8 @@ rclone config password <name> [<key> <value>]+ [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -168,4 +174,4 @@ rclone config password <name> [<key> <value>]+ [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_config_providers.md b/docs/content/commands/rclone_config_providers.md
index f60d042ff..114991dda 100644
--- a/docs/content/commands/rclone_config_providers.md
+++ b/docs/content/commands/rclone_config_providers.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone config providers"
slug: rclone_config_providers
url: /commands/rclone_config_providers/
@@ -82,7 +82,7 @@ rclone config providers [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -95,23 +95,26 @@ rclone config providers [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file
- --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
@@ -134,7 +137,9 @@ rclone config providers [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -153,7 +158,8 @@ rclone config providers [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -161,4 +167,4 @@ rclone config providers [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_config_show.md b/docs/content/commands/rclone_config_show.md
index 913daaa6d..3cc974db6 100644
--- a/docs/content/commands/rclone_config_show.md
+++ b/docs/content/commands/rclone_config_show.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone config show"
slug: rclone_config_show
url: /commands/rclone_config_show/
@@ -82,7 +82,7 @@ rclone config show [<remote>] [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M.
(default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -95,23 +95,26 @@ rclone config show [<remote>] [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file
- --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
@@ -134,7 +137,9 @@ rclone config show [<remote>] [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -153,7 +158,8 @@ rclone config show [<remote>] [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -161,4 +167,4 @@ rclone config show [<remote>] [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_config_update.md b/docs/content/commands/rclone_config_update.md
index fb15b8900..22408a5cf 100644
--- a/docs/content/commands/rclone_config_update.md
+++ b/docs/content/commands/rclone_config_update.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone config update"
slug: rclone_config_update
url: /commands/rclone_config_update/
@@ -89,7 +89,7 @@ rclone config update <name> [<key> <value>]+ [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -102,23 +102,26 @@ rclone config update <name> [<key> <value>]+ [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file
- --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
@@ -141,7 +144,9 @@ rclone config update <name> [<key> <value>]+ [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -160,7 +165,8 @@ rclone config update <name> [<key> <value>]+ [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -168,4 +174,4 @@ rclone config update <name> [<key> <value>]+ [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md
index 2a65eb031..ce1ebfe22 100644
--- a/docs/content/commands/rclone_copy.md
+++ b/docs/content/commands/rclone_copy.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@@ -118,7 +118,7 @@ rclone copy source:path dest:path [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -131,23 +131,26 @@ rclone copy source:path dest:path [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -170,7 +173,9 @@ rclone copy source:path dest:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -189,12 +194,13 @@ rclone copy source:path dest:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md index 276ff178d..0672842f1 100644 --- a/docs/content/commands/rclone_copyto.md +++ b/docs/content/commands/rclone_copyto.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone copyto" slug: rclone_copyto url: /commands/rclone_copyto/ @@ -108,7 +108,7 @@ rclone copyto source:path dest:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -121,23 +121,26 @@ rclone copyto source:path dest:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -160,7 +163,9 @@ rclone copyto source:path dest:path [flags] --rc-user string User name for authentication. 
--retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -179,12 +184,13 @@ rclone copyto source:path dest:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md index f79529668..c3e3718ea 100644 --- a/docs/content/commands/rclone_cryptcheck.md +++ b/docs/content/commands/rclone_cryptcheck.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone cryptcheck" slug: rclone_cryptcheck url: /commands/rclone_cryptcheck/ @@ -105,7 +105,7 @@ rclone cryptcheck remote:path cryptedremote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -118,23 +118,26 @@ rclone cryptcheck remote:path cryptedremote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. 
-I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -157,7 +160,9 @@ rclone cryptcheck remote:path cryptedremote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -176,12 +181,13 @@ rclone cryptcheck remote:path cryptedremote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_cryptdecode.md b/docs/content/commands/rclone_cryptdecode.md index 18ba88e50..4f679f8d7 100644 --- a/docs/content/commands/rclone_cryptdecode.md +++ b/docs/content/commands/rclone_cryptdecode.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone cryptdecode" slug: rclone_cryptdecode url: /commands/rclone_cryptdecode/ @@ -94,7 +94,7 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -107,23 +107,26 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. 
--memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -146,7 +149,9 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -165,12 +170,13 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_dbhashsum.md b/docs/content/commands/rclone_dbhashsum.md index 458404400..2c07d5f8a 100644 --- a/docs/content/commands/rclone_dbhashsum.md +++ b/docs/content/commands/rclone_dbhashsum.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone dbhashsum" slug: rclone_dbhashsum url: /commands/rclone_dbhashsum/ @@ -87,7 +87,7 @@ rclone dbhashsum remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -100,23 +100,26 @@ rclone dbhashsum remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -139,7 +142,9 @@ rclone dbhashsum remote:path [flags] --rc-user string User name for authentication. 
--retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -158,12 +163,13 @@ rclone dbhashsum remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md index 71cf4e2b0..6c0f39be8 100644 --- a/docs/content/commands/rclone_dedupe.md +++ b/docs/content/commands/rclone_dedupe.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone dedupe" slug: rclone_dedupe url: /commands/rclone_dedupe/ @@ -80,6 +80,7 @@ Dedupe can be run non interactively using the `--dedupe-mode` flag or by using a * `--dedupe-mode first` - removes identical files then keeps the first one. * `--dedupe-mode newest` - removes identical files then keeps the newest one. * `--dedupe-mode oldest` - removes identical files then keeps the oldest one. + * `--dedupe-mode largest` - removes identical files then keeps the largest one. * `--dedupe-mode rename` - removes identical files then renames the rest to be different. For example to rename all the identically named photos in your Google Photos directory, do @@ -162,7 +163,7 @@ rclone dedupe [mode] remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -175,23 +176,26 @@ rclone dedupe [mode] remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -214,7 +218,9 @@ rclone dedupe [mode] remote:path [flags] --rc-user string User name for authentication. 
--retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -233,12 +239,13 @@ rclone dedupe [mode] remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md index 7a84c38d4..caddf364b 100644 --- a/docs/content/commands/rclone_delete.md +++ b/docs/content/commands/rclone_delete.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone delete" slug: rclone_delete url: /commands/rclone_delete/ @@ -99,7 +99,7 @@ rclone delete remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -112,23 +112,26 @@ rclone delete remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. 
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -151,7 +154,9 @@ rclone delete remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -170,12 +175,13 @@ rclone delete remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md index df89c23ce..842792e15 100644 --- a/docs/content/commands/rclone_genautocomplete.md +++ b/docs/content/commands/rclone_genautocomplete.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone genautocomplete" slug: rclone_genautocomplete url: /commands/rclone_genautocomplete/ @@ -81,7 +81,7 @@ Run with --help to list the supported shells. --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -94,23 +94,26 @@ Run with --help to list the supported shells. --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. 
--memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -133,7 +136,9 @@ Run with --help to list the supported shells. --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -152,14 +157,15 @@ Run with --help to list the supported shells. --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 * [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone. * [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone. -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_genautocomplete_bash.md b/docs/content/commands/rclone_genautocomplete_bash.md index 50f56f00b..9c087dbeb 100644 --- a/docs/content/commands/rclone_genautocomplete_bash.md +++ b/docs/content/commands/rclone_genautocomplete_bash.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone genautocomplete bash" slug: rclone_genautocomplete_bash url: /commands/rclone_genautocomplete_bash/ @@ -97,7 +97,7 @@ rclone genautocomplete bash [output_file] [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -110,23 +110,26 @@ rclone genautocomplete bash [output_file] [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -149,7 +152,9 @@ rclone genautocomplete bash [output_file] [flags] --rc-user string User name for authentication. 
--retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -168,7 +173,8 @@ rclone genautocomplete bash [output_file] [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` @@ -176,4 +182,4 @@ rclone genautocomplete bash [output_file] [flags] * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_genautocomplete_zsh.md b/docs/content/commands/rclone_genautocomplete_zsh.md index b39a5bae0..9e5a4582a 100644 --- a/docs/content/commands/rclone_genautocomplete_zsh.md +++ b/docs/content/commands/rclone_genautocomplete_zsh.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone genautocomplete zsh" slug: rclone_genautocomplete_zsh url: /commands/rclone_genautocomplete_zsh/ @@ -97,7 +97,7 @@ rclone genautocomplete zsh [output_file] [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -110,23 +110,26 @@ rclone genautocomplete zsh [output_file] [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. 
-I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -149,7 +152,9 @@ rclone genautocomplete zsh [output_file] [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -168,7 +173,8 @@ rclone genautocomplete zsh [output_file] [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` @@ -176,4 +182,4 @@ rclone genautocomplete zsh [output_file] [flags] * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md index 446be4537..73a7e619b 100644 --- a/docs/content/commands/rclone_gendocs.md +++ b/docs/content/commands/rclone_gendocs.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone gendocs" slug: rclone_gendocs url: /commands/rclone_gendocs/ @@ -85,7 +85,7 @@ rclone gendocs output_directory [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -98,23 +98,26 @@ rclone gendocs output_directory [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. 
--memprofile string Write memory profile to file
- --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
@@ -137,7 +140,9 @@ rclone gendocs output_directory [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -156,12 +161,13 @@ rclone gendocs output_directory [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```

### SEE ALSO

-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41

-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_hashsum.md b/docs/content/commands/rclone_hashsum.md
new file mode 100644
index 000000000..4632e0ace
--- /dev/null
+++ b/docs/content/commands/rclone_hashsum.md
@@ -0,0 +1,187 @@
+---
+date: 2018-04-28T11:44:58+01:00
+title: "rclone hashsum"
+slug: rclone_hashsum
+url: /commands/rclone_hashsum/
+---
+## rclone hashsum
+
+Produces a hashsum file for all the objects in the path.
+
+### Synopsis
+
+
+Produces a hash file for all the objects in the path using the hash
+named. The output is in the same format as the standard
+md5sum/sha1sum tool.
+
+Run without a hash to see the list of supported hashes, eg
+
+ $ rclone hashsum
+ Supported hashes are:
+ * MD5
+ * SHA-1
+ * DropboxHash
+ * QuickXorHash
+
+Then
+
+ $ rclone hashsum MD5 remote:path
+
+
+```
+rclone hashsum remote:path [flags]
+```
+
+### Options
+
+```
+ -h, --help help for hashsum
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
(default 9G) + --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) + --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) + --buffer-size int Buffer size when copying files. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming + --cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") + --cache-chunk-size string The size of a chunk (default "5M") + --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") + --cache-db-purge Purge the cache DB before + --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") + --cache-info-age string How much time should object info be stored in cache (default "6h") + --cache-read-retries int How many times to retry a read from a cache storage (default 10) + --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage + --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") + --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") + --cache-workers int How many workers should run in parallel to download chunks (default 4) + --cache-writes Will cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transfering + --delete-before When synchronizing, delete files on destination before transfering + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. 
Use help to see a list. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-impersonate string Impersonate this user when using a service account. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me + --drive-skip-gdocs Skip google documents in all listings. + --drive-trashed-only Only show files that are in the trash + --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) + -n, --dry-run Do a trial run with no permanent changes + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP bodies - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. + --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). + --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames + --log-file string Log everything to this file + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. 
+ --memprofile string Write memory profile to file + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Obsolete - does nothing. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries. + --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --retries int Retry operations this many times if they fail (default 3) + --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) + --sftp-ask-password Allow asking for SFTP password when needed. + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. 
+      --use-server-modtime                 Use server modified time instead of object metadata
+      --user-agent string                  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
+  -v, --verbose count                      Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+
+* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.41
+
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_link.md b/docs/content/commands/rclone_link.md
new file mode 100644
index 000000000..b03e746ff
--- /dev/null
+++ b/docs/content/commands/rclone_link.md
@@ -0,0 +1,180 @@
+---
+date: 2018-04-28T11:44:58+01:00
+title: "rclone link"
+slug: rclone_link
+url: /commands/rclone_link/
+---
+## rclone link
+
+Generate public link to file/folder.
+
+### Synopsis
+
+
+rclone link will create or retrieve a public link to the given file or folder.
+
+    rclone link remote:path/to/file
+    rclone link remote:path/to/folder/
+
+If successful, the last line of the output will contain the link. Exact
+capabilities depend on the remote, but the link will always be created with
+the least constraints – e.g. no expiry, no password protection, accessible
+without an account.
+
+
+```
+rclone link remote:path [flags]
+```
+
+### Options
+
+```
+  -h, --help   help for link
+```
+
+### Options inherited from parent commands
+
+```
+      --acd-templink-threshold int         Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-upload-wait-per-gb duration    Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --ask-password                       Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm                       If enabled, do not request console confirmation.
+      --azureblob-chunk-size int           Upload chunk size. Must fit in memory. (default 4M)
+      --azureblob-upload-cutoff int        Cutoff for switching to chunked upload (default 256M)
+      --b2-chunk-size int                  Upload chunk size. Must fit in memory. (default 96M)
+      --b2-hard-delete                     Permanently delete files on remote removal, otherwise hide files.
+      --b2-test-mode string                A flag string for X-Bz-Test-Mode header.
+      --b2-upload-cutoff int               Cutoff for switching to chunked upload (default 190.735M)
+      --b2-versions                        Include old versions in directory listings.
+      --backup-dir string                  Make backups into hierarchy based in DIR.
+      --bind string                        Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-upload-cutoff int              Cutoff for switching to multipart upload (default 50M)
+      --buffer-size int                    Buffer size when copying files. (default 16M)
+      --bwlimit BwTimetable                Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval string  Interval at which chunk cleanup runs (default "1m")
+      --cache-chunk-no-memory              Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string            Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+      --cache-chunk-size string            The size of a chunk (default "5M")
+      --cache-db-path string               Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+      --cache-db-purge                     Purge the cache DB before
+      --cache-db-wait-time duration        How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string                   Directory rclone will use for caching.
(default "/home/ncw/.cache/rclone") + --cache-info-age string How much time should object info be stored in cache (default "6h") + --cache-read-retries int How many times to retry a read from a cache storage (default 10) + --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage + --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") + --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") + --cache-workers int How many workers should run in parallel to download chunks (default 4) + --cache-writes Will cache file data on writes through the FS + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transfering + --delete-before When synchronizing, delete files on destination before transfering + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-impersonate string Impersonate this user when using a service account. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me + --drive-skip-gdocs Skip google documents in all listings. + --drive-trashed-only Only show files that are in the trash + --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) + -n, --dry-run Do a trial run with no permanent changes + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP bodies - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
+ --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). + --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames + --log-file string Log everything to this file + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. + --memprofile string Write memory profile to file + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Obsolete - does nothing. + --no-update-modtime Don't update destination mod-time if files identical. + -x, --one-file-system Don't cross filesystem boundaries. + --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. 
+ --retries int Retry operations this many times if they fail (default 3) + --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) + --sftp-ask-password Allow asking for SFTP password when needed. + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") + -v, --verbose count Print lots more stuff (repeat for more) +``` + +### SEE ALSO + +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 + +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md index ec0013a9b..3b41f993d 100644 --- a/docs/content/commands/rclone_listremotes.md +++ b/docs/content/commands/rclone_listremotes.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone listremotes" slug: rclone_listremotes url: /commands/rclone_listremotes/ @@ -87,7 +87,7 @@ rclone listremotes [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -100,23 +100,26 @@ rclone listremotes [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -139,7 +142,9 @@ rclone listremotes [flags] --rc-user string User name for authentication. 
      --retries int                        Retry operations this many times if they fail (default 3)
       --s3-acl string                      Canned ACL used when creating buckets and/or storing objects in S3
-      --s3-storage-class string            Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+      --s3-chunk-size int                  Chunk size to use for uploading (default 5M)
+      --s3-disable-checksum                Don't store MD5 checksum with object metadata
+      --s3-storage-class string            Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
       --sftp-ask-password                  Allow asking for SFTP password when needed.
       --size-only                          Skip based on size only, not mod-time or checksum
       --skip-links                         Don't warn about skipped symlinks.
@@ -158,12 +163,13 @@ rclone listremotes [flags]
       --track-renames                      When synchronizing, track file renames and do a server side move if possible
       --transfers int                      Number of file transfers to run in parallel. (default 4)
   -u, --update                             Skip files that are newer on the destination.
-      --user-agent string                  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+      --use-server-modtime                 Use server modified time instead of object metadata
+      --user-agent string                  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
   -v, --verbose count                      Print lots more stuff (repeat for more)
 ```
 
 ### SEE ALSO
 
-* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.40
+* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.41
 
-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md
index d8d43346e..2c1f5d06f 100644
--- a/docs/content/commands/rclone_ls.md
+++ b/docs/content/commands/rclone_ls.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
 title: "rclone ls"
 slug: rclone_ls
 url: /commands/rclone_ls/
@@ -14,6 +14,15 @@ List the objects in the path with size and path.
 Lists the objects in the source path to standard output in a human
 readable format with size and path. Recurses by default.
 
+Eg
+
+    $ rclone ls swift:bucket
+        60295 bevajer5jef
+        90613 canole
+        94467 diwogej7
+        37600 fubuwic
+
+
 Any of the filtering options can be applied to this command.
 
 There are several related list commands
 
@@ -28,9 +37,13 @@ There are several related list commands
 `lsf` is designed to be human and machine readable.
 `lsjson` is designed to be machine readable.
 
-Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.
 
-The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+
+Listing a non existent directory will produce an error except for
+remotes which can't have empty directories (eg s3, swift, gcs, etc -
+the bucket based remotes).
 
 
 ```
@@ -103,7 +116,7 @@ rclone ls remote:path [flags]
       --drive-use-trash                    Send files to the trash instead of deleting permanently. (default true)
       --dropbox-chunk-size int             Upload chunk size. Max 150M.
(default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -116,23 +129,26 @@ rclone ls remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -155,7 +171,9 @@ rclone ls remote:path [flags] --rc-user string User name for authentication. 
      --retries int                        Retry operations this many times if they fail (default 3)
       --s3-acl string                      Canned ACL used when creating buckets and/or storing objects in S3
-      --s3-storage-class string            Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+      --s3-chunk-size int                  Chunk size to use for uploading (default 5M)
+      --s3-disable-checksum                Don't store MD5 checksum with object metadata
+      --s3-storage-class string            Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
       --sftp-ask-password                  Allow asking for SFTP password when needed.
       --size-only                          Skip based on size only, not mod-time or checksum
       --skip-links                         Don't warn about skipped symlinks.
@@ -174,12 +192,13 @@ rclone ls remote:path [flags]
       --track-renames                      When synchronizing, track file renames and do a server side move if possible
       --transfers int                      Number of file transfers to run in parallel. (default 4)
   -u, --update                             Skip files that are newer on the destination.
-      --user-agent string                  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+      --use-server-modtime                 Use server modified time instead of object metadata
+      --user-agent string                  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
   -v, --verbose count                      Print lots more stuff (repeat for more)
 ```
 
 ### SEE ALSO
 
-* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.40
+* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.41
 
-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md
index 5582d5967..0bba2cf9d 100644
--- a/docs/content/commands/rclone_lsd.md
+++ b/docs/content/commands/rclone_lsd.md
@@ -1,5 +1,5 @@
 ---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
 title: "rclone lsd"
 slug: rclone_lsd
 url: /commands/rclone_lsd/
@@ -11,8 +11,27 @@ List all directories/containers/buckets in the path.
 
 ### Synopsis
 
 
-Lists the directories in the source path to standard output. Recurses
-by default.
+Lists the directories in the source path to standard output. Does not
+recurse by default. Use the -R flag to recurse.
+
+This command lists the total size of the directory (if known, -1 if
+not), the modification time (if known, the current time if not), the
+number of objects in the directory (if known, -1 if not) and the name
+of the directory. Eg
+
+    $ rclone lsd swift:
+      494000 2018-04-26 08:43:20     10000 10000files
+          65 2018-04-26 08:43:20         1 1File
+
+Or
+
+    $ rclone lsd drive:test
+          -1 2016-10-17 17:41:53        -1 1000files
+          -1 2017-01-03 14:40:54        -1 2500files
+          -1 2017-07-08 14:39:28        -1 4000files
+
+If you just want the directory names, use "rclone lsf --dirs-only".
+
 Any of the filtering options can be applied to this command.
 
 There are several related list commands
 
@@ -28,9 +47,13 @@ There are several related list commands
 `lsf` is designed to be human and machine readable.
 `lsjson` is designed to be machine readable.
 
-Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.
 
-The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+ +Listing a non existent directory will produce an error except for +remotes which can't have empty directories (eg s3, swift, gcs, etc - +the bucket based remotes). ``` @@ -40,7 +63,8 @@ rclone lsd remote:path [flags] ### Options ``` - -h, --help help for lsd + -h, --help help for lsd + -R, --recursive Recurse into the listing. ``` ### Options inherited from parent commands @@ -103,7 +127,7 @@ rclone lsd remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -116,23 +140,26 @@ rclone lsd remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. 
@@ -155,7 +182,9 @@ rclone lsd remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -174,12 +203,13 @@ rclone lsd remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_lsf.md b/docs/content/commands/rclone_lsf.md index b331c8ffc..bf9401f15 100644 --- a/docs/content/commands/rclone_lsf.md +++ b/docs/content/commands/rclone_lsf.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone lsf" slug: rclone_lsf url: /commands/rclone_lsf/ @@ -16,6 +16,15 @@ standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix. +Eg + + $ rclone lsf swift:bucket + bevajer5jef + canole + diwogej7 + ferejej3gux/ + fubuwic + Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output: @@ -28,6 +37,15 @@ output: So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last. +Eg + + $ rclone lsf --format "tsp" swift:bucket + 2016-06-25 18:55:41;60295;bevajer5jef + 2016-06-25 18:55:43;90613;canole + 2016-06-25 18:55:43;94467;diwogej7 + 2018-04-26 08:50:45;0;ferejej3gux/ + 2016-06-25 18:55:40;37600;fubuwic + If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object @@ -39,12 +57,31 @@ For example to emulate the md5sum command you can use rclone lsf -R --hash MD5 --format hp --separator " " --files-only . 
+Eg
+
+    $ rclone lsf -R --hash MD5 --format hp --separator "  " --files-only swift:bucket
+    7908e352297f0f530b84a756f188baa3  bevajer5jef
+    cd65ac234e6fea5925974a51cdd865cc  canole
+    03b5341b4f234b9d984d03ad076bae91  diwogej7
+    8fd37c3810dd660778137ac3a66cc06d  fubuwic
+    99713e14a4c4ff553acaf1930fad985b  gixacuh7ku
+
 (Though "rclone md5sum ." is an easier way of typing this.)
 
 By default the separator is ";" this can be changed with the
 --separator flag. Note that separators aren't escaped in the path so
 putting it last is a good strategy.
 
+Eg
+
+    $ rclone lsf --separator "," --format "tshp" swift:bucket
+    2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
+    2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
+    2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
+    2018-04-26 08:52:53,0,,ferejej3gux/
+    2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic
+
+
 Any of the filtering options can be applied to this command.
 
 There are several related list commands
 
@@ -59,9 +96,13 @@ There are several related list commands
 `lsf` is designed to be human and machine readable.
 `lsjson` is designed to be machine readable.
 
-Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.
 
-The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+
+Listing a non existent directory will produce an error except for
+remotes which can't have empty directories (eg s3, swift, gcs, etc -
+the bucket based remotes).
 
 
 ```
@@ -141,7 +182,7 @@ rclone lsf remote:path [flags]
       --drive-use-trash                    Send files to the trash instead of deleting permanently. (default true)
       --dropbox-chunk-size int             Upload chunk size. Max 150M. (default 48M)
   -n, --dry-run                            Do a trial run with no permanent changes
-      --dump string                        List of items to dump from: headers,bodies,requests,responses,auth,filters
+      --dump string                        List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
       --dump-bodies                        Dump HTTP headers and bodies - may contain sensitive info
       --dump-headers                       Dump HTTP bodies - may contain sensitive info
       --exclude stringArray                Exclude files matching pattern
@@ -154,23 +195,26 @@ rclone lsf remote:path [flags]
       --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
       --gcs-storage-class string           Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
       --ignore-checksum                    Skip post copy check of checksums.
+      --ignore-errors                      delete even if there are I/O errors
       --ignore-existing                    Skip all files that exist on destination
       --ignore-size                        Ignore size when skipping use mod-time or checksum.
   -I, --ignore-times                       Don't skip files that match size and time - transfer all files
       --immutable                          Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -193,7 +237,9 @@ rclone lsf remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -212,12 +258,13 @@ rclone lsf remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md index 83ed405b2..ad1aa7ee1 100644 --- a/docs/content/commands/rclone_lsjson.md +++ b/docs/content/commands/rclone_lsjson.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone lsjson" slug: rclone_lsjson url: /commands/rclone_lsjson/ @@ -58,9 +58,13 @@ There are several related list commands `lsf` is designed to be human and machine readable. `lsjson` is designed to be machine readable. -Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion. +Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion. -The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse. + +Listing a non existent directory will produce an error except for +remotes which can't have empty directories (eg s3, swift, gcs, etc - +the bucket based remotes). ``` @@ -137,7 +141,7 @@ rclone lsjson remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -150,23 +154,26 @@ rclone lsjson remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. 
(default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -189,7 +196,9 @@ rclone lsjson remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -208,12 +217,13 @@ rclone lsjson remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md index 2d10b8cb2..e689dcca4 100644 --- a/docs/content/commands/rclone_lsl.md +++ b/docs/content/commands/rclone_lsl.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone lsl" slug: rclone_lsl url: /commands/rclone_lsl/ @@ -14,6 +14,15 @@ List the objects in path with modification time, size and path. 
 Lists the objects in the source path to standard output in a human
 readable format with modification time, size and path. Recurses by default.
 
+Eg
+
+    $ rclone lsl swift:bucket
+        60295 2016-06-25 18:55:41.062626927 bevajer5jef
+        90613 2016-06-25 18:55:43.302607074 canole
+        94467 2016-06-25 18:55:43.046609333 diwogej7
+        37600 2016-06-25 18:55:40.814629136 fubuwic
+
+
 Any of the filtering options can be applied to this command.
 
 There are several related list commands
 
@@ -28,9 +37,13 @@ There are several related list commands
 `lsf` is designed to be human and machine readable.
 `lsjson` is designed to be machine readable.
 
-Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+Note that `ls` and `lsl` recurse by default - use "--max-depth 1" to stop the recursion.
 
-The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+
+Listing a non existent directory will produce an error except for
+remotes which can't have empty directories (eg s3, swift, gcs, etc -
+the bucket based remotes).
 
 
 ```
@@ -103,7 +116,7 @@ rclone lsl remote:path [flags]
       --drive-use-trash                    Send files to the trash instead of deleting permanently. (default true)
       --dropbox-chunk-size int             Upload chunk size. Max 150M. (default 48M)
   -n, --dry-run                            Do a trial run with no permanent changes
-      --dump string                        List of items to dump from: headers,bodies,requests,responses,auth,filters
+      --dump string                        List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
       --dump-bodies                        Dump HTTP headers and bodies - may contain sensitive info
       --dump-headers                       Dump HTTP bodies - may contain sensitive info
       --exclude stringArray                Exclude files matching pattern
@@ -116,23 +129,26 @@ rclone lsl remote:path [flags]
       --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
       --gcs-storage-class string           Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
       --ignore-checksum                    Skip post copy check of checksums.
+      --ignore-errors                      delete even if there are I/O errors
       --ignore-existing                    Skip all files that exist on destination
       --ignore-size                        Ignore size when skipping use mod-time or checksum.
   -I, --ignore-times                       Don't skip files that match size and time - transfer all files
       --immutable                          Do not modify files. Fail if existing files have been modified.
       --include stringArray                Include files matching pattern
       --include-from stringArray           Read include patterns from file
+      --local-no-check-updated             Don't check to see if the files change during upload
       --local-no-unicode-normalization     Don't apply unicode normalization to paths and filenames
       --log-file string                    Log everything to this file
       --log-level string                   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
       --low-level-retries int              Number of low level retries to do. (default 10)
-      --max-age duration                   Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-age duration                   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
       --max-delete int                     When synchronizing, limit the number of deletes (default -1)
       --max-depth int                      If set limits the recursion depth to this.
(default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -155,7 +171,9 @@ rclone lsl remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -174,12 +192,13 @@ rclone lsl remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md index a8633ec54..dcd7dbb31 100644 --- a/docs/content/commands/rclone_md5sum.md +++ b/docs/content/commands/rclone_md5sum.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone md5sum" slug: rclone_md5sum url: /commands/rclone_md5sum/ @@ -85,7 +85,7 @@ rclone md5sum remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -98,23 +98,26 @@ rclone md5sum remote:path [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file
- --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
@@ -137,7 +140,9 @@ rclone md5sum remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -156,12 +161,13 @@ rclone md5sum remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```

### SEE ALSO

-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41

-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md
index 9a584b934..e24e5c260 100644
--- a/docs/content/commands/rclone_mkdir.md
+++ b/docs/content/commands/rclone_mkdir.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@@ -82,7 +82,7 @@ rclone mkdir remote:path [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -95,23 +95,26 @@ rclone mkdir remote:path [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -134,7 +137,9 @@ rclone mkdir remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -153,12 +158,13 @@ rclone mkdir remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```

### SEE ALSO

-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41

-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index 48a776df3..0626a9219 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
@@ -99,12 +99,30 @@ for solutions to make mount more reliable.

You can use the flag --attr-timeout to set the time the kernel caches
the attributes (size, modification time etc) for directory entries.
-The default is 0s - no caching - which is recommended for filesystems
-which can change outside the control of the kernel.
+The default is "1s", which caches files just long enough to avoid
+too many callbacks to rclone from the kernel.

-If you set it higher ('1s' or '1m' say) then the kernel will call back
-to rclone less often making it more efficient, however there may be
-strange effects when files change on the remote.
+In theory 0s should be the correct value for filesystems which can
+change outside the control of the kernel. However this causes quite a
+few problems such as
+[rclone using too much memory](https://github.com/ncw/rclone/issues/2157),
+[rclone not serving files to samba](https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112)
+and [excessive time listing directories](https://github.com/ncw/rclone/issues/2095#issuecomment-371141147).
+
+The kernel can cache the info about a file for the time given by
+"--attr-timeout". You may see corruption if the remote file changes
+length during this window. It will show up as either a truncated file
+or a file with garbage on the end. With "--attr-timeout 1s" this is
+very unlikely but not impossible. The higher you set "--attr-timeout"
+the more likely it is. The default setting of "1s" is the lowest
+setting which mitigates the problems above.
+
+If you set it higher ('10s' or '1m' say) then the kernel will call
+back to rclone less often, making it more efficient; however there is
+more chance of the corruption issue above.
+
+If files don't change on the remote outside of the control of rclone
+then there is no chance of corruption.

This is the same as setting the attr_timeout option in mount.fuse.

@@ -245,7 +263,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
- --attr-timeout duration Time for which file/directory attributes are cached.
+ --attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
--daemon Run mount as a daemon (background mode).
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
@@ -328,7 +346,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -341,23 +359,26 @@ rclone mount remote:path /path/to/mountpoint [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file
- --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
@@ -380,7 +401,9 @@ rclone mount remote:path /path/to/mountpoint [flags]
--rc-user string User name for authentication.
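# An illustrative sketch, not part of the generated flag listing: the
# "--attr-timeout" discussion above in practice. The remote name and
# mount point below are placeholders.
#
#   rclone mount remote:media /mnt/media --attr-timeout 10s --allow-other
#
# A longer attribute cache means fewer kernel callbacks; per the notes
# above this is only safe if the remote changes solely through rclone.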
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -399,12 +422,13 @@ rclone mount remote:path /path/to/mountpoint [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```

### SEE ALSO

-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41

-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md
index 3cc3f35aa..03630d2f4 100644
--- a/docs/content/commands/rclone_move.md
+++ b/docs/content/commands/rclone_move.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@@ -102,7 +102,7 @@ rclone move source:path dest:path [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -115,23 +115,26 @@ rclone move source:path dest:path [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -154,7 +157,9 @@ rclone move source:path dest:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -173,12 +178,13 @@ rclone move source:path dest:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```

### SEE ALSO

-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41

-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md
index 3ffb1bfc7..061b48cdd 100644
--- a/docs/content/commands/rclone_moveto.md
+++ b/docs/content/commands/rclone_moveto.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone moveto"
slug: rclone_moveto
url: /commands/rclone_moveto/
@@ -111,7 +111,7 @@ rclone moveto source:path dest:path [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -124,23 +124,26 @@ rclone moveto source:path dest:path [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -163,7 +166,9 @@ rclone moveto source:path dest:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -182,12 +187,13 @@ rclone moveto source:path dest:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md index e2943aa8c..4b9832a7b 100644 --- a/docs/content/commands/rclone_ncdu.md +++ b/docs/content/commands/rclone_ncdu.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone ncdu" slug: rclone_ncdu url: /commands/rclone_ncdu/ @@ -109,7 +109,7 @@ rclone ncdu remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -122,23 +122,26 @@ rclone ncdu remote:path [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file
- --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
@@ -161,7 +164,9 @@ rclone ncdu remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -180,12 +185,13 @@ rclone ncdu remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```

### SEE ALSO

-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41

-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md
index a7b17ec8a..6d28dcba6 100644
--- a/docs/content/commands/rclone_obscure.md
+++ b/docs/content/commands/rclone_obscure.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone obscure"
slug: rclone_obscure
url: /commands/rclone_obscure/
@@ -82,7 +82,7 @@ rclone obscure password [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -95,23 +95,26 @@ rclone obscure password [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
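# An illustrative sketch, not part of the generated flag listing:
# obscure takes the password as its argument and prints the obscured
# form for use in the rclone config file, eg
#
#   rclone obscure mysecretpassword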
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -134,7 +137,9 @@ rclone obscure password [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -153,12 +158,13 @@ rclone obscure password [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```

### SEE ALSO

-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41

-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md
index cc2f3e769..2446cd1da 100644
--- a/docs/content/commands/rclone_purge.md
+++ b/docs/content/commands/rclone_purge.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
@@ -86,7 +86,7 @@ rclone purge remote:path [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -99,23 +99,26 @@ rclone purge remote:path [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -138,7 +141,9 @@ rclone purge remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -157,12 +162,13 @@ rclone purge remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_rc.md b/docs/content/commands/rclone_rc.md index 3116309b4..95c147143 100644 --- a/docs/content/commands/rclone_rc.md +++ b/docs/content/commands/rclone_rc.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone rc" slug: rclone_rc url: /commands/rclone_rc/ @@ -92,7 +92,7 @@ rclone rc commands parameter [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -105,23 +105,26 @@ rclone rc commands parameter [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
+ --local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --mega-debug If set then output more debug from mega.
--memprofile string Write memory profile to file
- --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
@@ -144,7 +147,9 @@ rclone rc commands parameter [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --s3-chunk-size int Chunk size to use for uploading (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
@@ -163,12 +168,13 @@ rclone rc commands parameter [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41")
-v, --verbose count Print lots more stuff (repeat for more)
```

### SEE ALSO

-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41

-###### Auto generated by spf13/cobra on 19-Mar-2018
+###### Auto generated by spf13/cobra on 28-Apr-2018
diff --git a/docs/content/commands/rclone_rcat.md b/docs/content/commands/rclone_rcat.md
index 69f3e578a..e93f63a49 100644
--- a/docs/content/commands/rclone_rcat.md
+++ b/docs/content/commands/rclone_rcat.md
@@ -1,5 +1,5 @@
---
-date: 2018-03-19T10:05:30Z
+date: 2018-04-28T11:44:58+01:00
title: "rclone rcat"
slug: rclone_rcat
url: /commands/rclone_rcat/
@@ -104,7 +104,7 @@ rclone rcat remote:path [flags]
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
@@ -117,23 +117,26 @@ rclone rcat remote:path [flags]
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
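# An illustrative sketch, not part of the generated flag listing: rcat
# copies standard input to a file on the remote, so a quick upload from
# a shell looks like
#
#   echo "hello world" | rclone rcat remote:path/to/file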
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -156,7 +159,9 @@ rclone rcat remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -175,12 +180,13 @@ rclone rcat remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md index c01d7bb4e..7c8941568 100644 --- a/docs/content/commands/rclone_rmdir.md +++ b/docs/content/commands/rclone_rmdir.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone rmdir" slug: rclone_rmdir url: /commands/rclone_rmdir/ @@ -84,7 +84,7 @@ rclone rmdir remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -97,23 +97,26 @@ rclone rmdir remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. 
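# Example (sketch; paths are placeholders): the re-worded age/size filters
# above combine, eg a dry run that transfers only files younger than 7 days
# and bigger than 10k.
rclone sync source:path dest:path --max-age 7d --min-size 10k -n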
--memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -136,7 +139,9 @@ rclone rmdir remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -155,12 +160,13 @@ rclone rmdir remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md index 7f699bd9d..b396a9ce5 100644 --- a/docs/content/commands/rclone_rmdirs.md +++ b/docs/content/commands/rclone_rmdirs.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone rmdirs" slug: rclone_rmdirs url: /commands/rclone_rmdirs/ @@ -92,7 +92,7 @@ rclone rmdirs remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -105,23 +105,26 @@ rclone rmdirs remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -144,7 +147,9 @@ rclone rmdirs remote:path [flags] --rc-user string User name for authentication. 
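# Example (sketch; the path is a placeholder): rmdirs prunes all empty
# directories under the path, whereas rmdir removes only the single empty
# directory given.
rclone rmdirs remote:path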
--retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -163,12 +168,13 @@ rclone rmdirs remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_serve.md b/docs/content/commands/rclone_serve.md index a9015fbde..e6a493d68 100644 --- a/docs/content/commands/rclone_serve.md +++ b/docs/content/commands/rclone_serve.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone serve" slug: rclone_serve url: /commands/rclone_serve/ @@ -88,7 +88,7 @@ rclone serve [opts] [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -101,23 +101,26 @@ rclone serve [opts] [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. 
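# Example (sketch): the new S3 upload flags above in use; the "s3:" remote
# name and bucket path are placeholders, and 64M is just an illustrative
# multipart chunk size.
rclone sync /local/photos s3:bucket/photos --s3-chunk-size 64M --s3-storage-class ONEZONE_IA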
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -140,7 +143,9 @@ rclone serve [opts] [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -159,15 +164,16 @@ rclone serve [opts] [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 * [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP. 
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API. * [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav. -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md index 4db8b7fae..9080098d6 100644 --- a/docs/content/commands/rclone_serve_http.md +++ b/docs/content/commands/rclone_serve_http.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone serve http" slug: rclone_serve_http url: /commands/rclone_serve_http/ @@ -26,10 +26,11 @@ control the stats printing. Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all -IPs. By default it only listens on localhost. +IPs. By default it only listens on localhost. You can use port +:0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address -then using Authentication if advised - see the next section for info. +then using Authentication is advised - see the next section for info. --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time @@ -278,7 +279,7 @@ rclone serve http remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -291,23 +292,26 @@ rclone serve http remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. 
(default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -330,7 +334,9 @@ rclone serve http remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -349,7 +355,8 @@ rclone serve http remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` @@ -357,4 +364,4 @@ rclone serve http remote:path [flags] * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_serve_restic.md b/docs/content/commands/rclone_serve_restic.md index f53c98c0f..46dfa54ea 100644 --- a/docs/content/commands/rclone_serve_restic.md +++ b/docs/content/commands/rclone_serve_restic.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone serve restic" slug: rclone_serve_restic url: /commands/rclone_serve_restic/ @@ -88,10 +88,11 @@ these **must** end with /. 
Eg Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all -IPs. By default it only listens on localhost. +IPs. By default it only listens on localhost. You can use port +:0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address -then using Authentication if advised - see the next section for info. +then using Authentication is advised - see the next section for info. --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time @@ -142,6 +143,7 @@ rclone serve restic remote:path [flags] ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --append-only disallow deletion of repository data --cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with -h, --help help for restic @@ -216,7 +218,7 @@ rclone serve restic remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -229,23 +231,26 @@ rclone serve restic remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. 
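# Example (sketch; remote name is a placeholder): serve an append-only restic
# REST endpoint on the default address, or pass --addr :0 to let the OS pick
# a free port. The rest: URL form below is assumed from restic's conventions,
# not taken from this manual.
rclone serve restic remote:backups --addr localhost:8080 --append-only
# restic -r rest:http://localhost:8080/ init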
--memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -268,7 +273,9 @@ rclone serve restic remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -287,7 +294,8 @@ rclone serve restic remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` @@ -295,4 +303,4 @@ rclone serve restic remote:path [flags] * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md index 6ccd9eebf..93924c683 100644 --- a/docs/content/commands/rclone_serve_webdav.md +++ b/docs/content/commands/rclone_serve_webdav.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone serve webdav" slug: rclone_serve_webdav url: /commands/rclone_serve_webdav/ @@ -23,10 +23,11 @@ which is undesirable: see https://github.com/golang/go/issues/22577 Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all -IPs. By default it only listens on localhost. +IPs. By default it only listens on localhost. You can use port +:0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address -then using Authentication if advised - see the next section for info. +then using Authentication is advised - see the next section for info. --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. 
Note that this is the total time @@ -275,7 +276,7 @@ rclone serve webdav remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -288,23 +289,26 @@ rclone serve webdav remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -327,7 +331,9 @@ rclone serve webdav remote:path [flags] --rc-user string User name for authentication. 
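# Example (sketch; remote and path are placeholders): serve remote:path over
# webdav on all interfaces. Per the notes above, enable the server's
# authentication options before binding to a public or LAN-accessible address.
rclone serve webdav remote:path --addr :8080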
--retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -346,7 +352,8 @@ rclone serve webdav remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` @@ -354,4 +361,4 @@ rclone serve webdav remote:path [flags] * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md index 2a1b9217c..585abe92e 100644 --- a/docs/content/commands/rclone_sha1sum.md +++ b/docs/content/commands/rclone_sha1sum.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone sha1sum" slug: rclone_sha1sum url: /commands/rclone_sha1sum/ @@ -85,7 +85,7 @@ rclone sha1sum remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -98,23 +98,26 @@ rclone sha1sum remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. 
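# Example (sketch; the path is a placeholder): produce SHA1 checksums for all
# files under the path, in the same layout as the standard sha1sum tool.
rclone sha1sum remote:path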
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -137,7 +140,9 @@ rclone sha1sum remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -156,12 +161,13 @@ rclone sha1sum remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md index 66c85a15a..cd4637842 100644 --- a/docs/content/commands/rclone_size.md +++ b/docs/content/commands/rclone_size.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone size" slug: rclone_size url: /commands/rclone_size/ @@ -20,6 +20,7 @@ rclone size remote:path [flags] ``` -h, --help help for size + --json format output as JSON ``` ### Options inherited from parent commands @@ -82,7 +83,7 @@ rclone size remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -95,23 +96,26 @@ rclone size remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. 
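# Example (sketch; the path is a placeholder): the new --json flag above makes
# size emit machine-readable totals (an object with the file count and total
# bytes) instead of the plain text summary.
rclone size remote:path --json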
--memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -134,7 +138,9 @@ rclone size remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -153,12 +159,13 @@ rclone size remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md index 5abf9a2f5..fd6f717ff 100644 --- a/docs/content/commands/rclone_sync.md +++ b/docs/content/commands/rclone_sync.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone sync" slug: rclone_sync url: /commands/rclone_sync/ @@ -101,7 +101,7 @@ rclone sync source:path dest:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -114,23 +114,26 @@ rclone sync source:path dest:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -153,7 +156,9 @@ rclone sync source:path dest:path [flags] --rc-user string User name for authentication. 
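# Example (sketch; paths are placeholders): with --update, the new
# --use-server-modtime flag above lets skip decisions use the server-side
# upload time rather than object metadata, which avoids extra metadata reads
# on some remotes.
rclone sync source:path dest:path --update --use-server-modtime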
--retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -172,12 +177,13 @@ rclone sync source:path dest:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_touch.md b/docs/content/commands/rclone_touch.md index 73e243e98..03b38883a 100644 --- a/docs/content/commands/rclone_touch.md +++ b/docs/content/commands/rclone_touch.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone touch" slug: rclone_touch url: /commands/rclone_touch/ @@ -84,7 +84,7 @@ rclone touch remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -97,23 +97,26 @@ rclone touch remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. 
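# Example (sketch; the path is a placeholder): touch creates an empty file at
# the path, or updates its modification time if it already exists.
rclone touch remote:path/file.txt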
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -136,7 +139,9 @@ rclone touch remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -155,12 +160,13 @@ rclone touch remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_tree.md b/docs/content/commands/rclone_tree.md index 1c09d1079..5efca5991 100644 --- a/docs/content/commands/rclone_tree.md +++ b/docs/content/commands/rclone_tree.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone tree" slug: rclone_tree url: /commands/rclone_tree/ @@ -125,7 +125,7 @@ rclone tree remote:path [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -138,23 +138,26 @@ rclone tree remote:path [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. 
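# Example (sketch; the path is a placeholder): print the directory tree of
# the remote path; the filtering flags listed above apply to what is shown.
rclone tree remote:path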
--memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -177,7 +180,9 @@ rclone tree remote:path [flags] --rc-user string User name for authentication. --retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -196,12 +201,13 @@ rclone tree remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md index 7073ab4fb..f9113ad94 100644 --- a/docs/content/commands/rclone_version.md +++ b/docs/content/commands/rclone_version.md @@ -1,5 +1,5 @@ --- -date: 2018-03-19T10:05:30Z +date: 2018-04-28T11:44:58+01:00 title: "rclone version" slug: rclone_version url: /commands/rclone_version/ @@ -82,7 +82,7 @@ rclone version [flags] --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 48M) -n, --dry-run Do a trial run with no permanent changes - --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters + --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP bodies - may contain sensitive info --exclude stringArray Exclude files matching pattern @@ -95,23 +95,26 @@ rclone version [flags] --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-check-updated Don't check to see if the files change during upload --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --low-level-retries int Number of low level retries to do. (default 10) - --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) - --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --mega-debug If set then output more debug from mega. --memprofile string Write memory profile to file - --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) - --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --modify-window duration Max time diff to be considered the same (default 1ns) --no-check-certificate Do not verify the server SSL certificate. Insecure. --no-gzip-encoding Don't set Accept-Encoding: gzip. @@ -134,7 +137,9 @@ rclone version [flags] --rc-user string User name for authentication. 
--retries int Retry operations this many times if they fail (default 3) --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 - --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --s3-chunk-size int Chunk size to use for uploading (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --sftp-ask-password Allow asking for SFTP password when needed. --size-only Skip based on size only, not mod-time or checksum --skip-links Don't warn about skipped symlinks. @@ -153,12 +158,13 @@ rclone version [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40") + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.41") -v, --verbose count Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.41 -###### Auto generated by spf13/cobra on 19-Mar-2018 +###### Auto generated by spf13/cobra on 28-Apr-2018 diff --git a/docs/layouts/partials/version.html b/docs/layouts/partials/version.html index 7eb7a1fe1..ffae14377 100644 --- a/docs/layouts/partials/version.html +++ b/docs/layouts/partials/version.html @@ -1 +1 @@ -v1.40 \ No newline at end of file +v1.41 \ No newline at end of file diff --git a/fs/version.go b/fs/version.go index 5c7c2027e..19430f4fd 100644 --- a/fs/version.go +++ b/fs/version.go @@ -1,4 +1,4 @@ package fs // Version of rclone -var Version = "v1.40-DEV" +var Version = "v1.41" diff --git a/rclone.1 b/rclone.1 index 1f0c13509..39090dfb9 100644 --- a/rclone.1 +++ b/rclone.1 @@ -1,7 +1,7 @@ .\"t .\" Automatically generated by Pandoc 1.19.2.1 .\" -.TH "rclone" "1" "Mar 19, 2018" "User Manual" "" +.TH "rclone" "1" "Apr 28, 2018" "User Manual" "" .hy .SH Rclone .PP @@ -40,6 +40,8 @@ IBM COS S3 .IP \[bu] 2 Memset Memstore .IP \[bu] 2 +Mega +.IP \[bu] 2 Microsoft Azure Blob Storage .IP \[bu] 2 Microsoft OneDrive @@ -54,7 +56,7 @@ Openstack Swift .IP \[bu] 2 Oracle Cloud Storage .IP \[bu] 2 -Ownloud +ownCloud .IP \[bu] 2 pCloud .IP \[bu] 2 @@ -321,6 +323,8 @@ HTTP (https://rclone.org/http/) .IP \[bu] 2 Hubic (https://rclone.org/hubic/) .IP \[bu] 2 +Mega (https://rclone.org/mega/) +.IP \[bu] 2 Microsoft Azure Blob Storage (https://rclone.org/azureblob/) .IP \[bu] 2 Microsoft OneDrive (https://rclone.org/onedrive/) @@ -681,6 +685,18 @@ Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default. .PP +Eg +.IP +.nf +\f[C] +$\ rclone\ ls\ swift:bucket +\ \ \ \ 60295\ bevajer5jef +\ \ \ \ 90613\ canole +\ \ \ \ 94467\ diwogej7 +\ \ \ \ 37600\ fubuwic +\f[] +.fi +.PP Any of the filtering options can be applied to this commmand. 
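+.PP
+For example, to restrict the listing to files bigger than 1 MByte
+(\f[C]\-\-min\-size\f[] is one of the filtering options documented
+later in this manual):
+.IP
+.nf
+\f[C]
+rclone\ ls\ \-\-min\-size\ 1M\ swift:bucket
+\f[]
+.fi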
.PP There are several related list commands @@ -699,11 +715,15 @@ There are several related list commands \f[C]lsf\f[] is designed to be human and machine readable. \f[C]lsjson\f[] is designed to be machine readable. .PP -Note that \f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] all recurse by default -\- use "\-\-max\-depth 1" to stop the recursion. +Note that \f[C]ls\f[] and \f[C]lsl\f[] recurse by default \- use +"\-\-max\-depth 1" to stop the recursion. .PP -The other list commands \f[C]lsf\f[],\f[C]lsjson\f[] do not recurse by -default \- use "\-R" to make them recurse. +The other list commands \f[C]lsd\f[],\f[C]lsf\f[],\f[C]lsjson\f[] do not +recurse by default \- use "\-R" to make them recurse. +.PP +Listing a non existent directory will produce an error except for +remotes which can\[aq]t have empty directories (eg s3, swift, gcs, etc +\- the bucket based remotes). .IP .nf \f[C] @@ -723,7 +743,34 @@ List all directories/containers/buckets in the path. .SS Synopsis .PP Lists the directories in the source path to standard output. -Recurses by default. +Does not recurse by default. +Use the \-R flag to recurse. +.PP +This command lists the total size of the directory (if known, \-1 if +not), the modification time (if known, the current time if not), the +number of objects in the directory (if known, \-1 if not) and the name +of the directory, Eg +.IP +.nf +\f[C] +$\ rclone\ lsd\ swift: +\ \ \ \ \ \ 494000\ 2018\-04\-26\ 08:43:20\ \ \ \ \ 10000\ 10000files +\ \ \ \ \ \ \ \ \ \ 65\ 2018\-04\-26\ 08:43:20\ \ \ \ \ \ \ \ \ 1\ 1File +\f[] +.fi +.PP +Or +.IP +.nf +\f[C] +$\ rclone\ lsd\ drive:test +\ \ \ \ \ \ \ \ \ \ \-1\ 2016\-10\-17\ 17:41:53\ \ \ \ \ \ \ \ \-1\ 1000files +\ \ \ \ \ \ \ \ \ \ \-1\ 2017\-01\-03\ 14:40:54\ \ \ \ \ \ \ \ \-1\ 2500files +\ \ \ \ \ \ \ \ \ \ \-1\ 2017\-07\-08\ 14:39:28\ \ \ \ \ \ \ \ \-1\ 4000files +\f[] +.fi +.PP +If you just want the directory names use "rclone lsf \-\-dirs\-only". .PP Any of the filtering options can be applied to this commmand. .PP @@ -743,11 +790,15 @@ There are several related list commands \f[C]lsf\f[] is designed to be human and machine readable. \f[C]lsjson\f[] is designed to be machine readable. .PP -Note that \f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] all recurse by default -\- use "\-\-max\-depth 1" to stop the recursion. +Note that \f[C]ls\f[] and \f[C]lsl\f[] recurse by default \- use +"\-\-max\-depth 1" to stop the recursion. .PP -The other list commands \f[C]lsf\f[],\f[C]lsjson\f[] do not recurse by -default \- use "\-R" to make them recurse. +The other list commands \f[C]lsd\f[],\f[C]lsf\f[],\f[C]lsjson\f[] do not +recurse by default \- use "\-R" to make them recurse. +.PP +Listing a non existent directory will produce an error except for +remotes which can\[aq]t have empty directories (eg s3, swift, gcs, etc +\- the bucket based remotes). .IP .nf \f[C] @@ -758,7 +809,8 @@ rclone\ lsd\ remote:path\ [flags] .IP .nf \f[C] -\ \ \-h,\ \-\-help\ \ \ help\ for\ lsd +\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ help\ for\ lsd +\ \ \-R,\ \-\-recursive\ \ \ Recurse\ into\ the\ listing. \f[] .fi .SS rclone lsl @@ -770,6 +822,18 @@ Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default. 
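+.PP
+To list just the top level, "\-\-max\-depth 1" can be used to stop the
+recursion (see the note below), eg
+.IP
+.nf
+\f[C]
+rclone\ lsl\ \-\-max\-depth\ 1\ swift:bucket
+\f[]
+.fi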
.PP +Eg +.IP +.nf +\f[C] +$\ rclone\ lsl\ swift:bucket +\ \ \ \ 60295\ 2016\-06\-25\ 18:55:41.062626927\ bevajer5jef +\ \ \ \ 90613\ 2016\-06\-25\ 18:55:43.302607074\ canole +\ \ \ \ 94467\ 2016\-06\-25\ 18:55:43.046609333\ diwogej7 +\ \ \ \ 37600\ 2016\-06\-25\ 18:55:40.814629136\ fubuwic +\f[] +.fi +.PP Any of the filtering options can be applied to this commmand. .PP There are several related list commands @@ -788,11 +852,15 @@ There are several related list commands \f[C]lsf\f[] is designed to be human and machine readable. \f[C]lsjson\f[] is designed to be machine readable. .PP -Note that \f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] all recurse by default -\- use "\-\-max\-depth 1" to stop the recursion. +Note that \f[C]ls\f[] and \f[C]lsl\f[] recurse by default \- use +"\-\-max\-depth 1" to stop the recursion. .PP -The other list commands \f[C]lsf\f[],\f[C]lsjson\f[] do not recurse by -default \- use "\-R" to make them recurse. +The other list commands \f[C]lsd\f[],\f[C]lsf\f[],\f[C]lsjson\f[] do not +recurse by default \- use "\-R" to make them recurse. +.PP +Listing a non existent directory will produce an error except for +remotes which can\[aq]t have empty directories (eg s3, swift, gcs, etc +\- the bucket based remotes). .IP .nf \f[C] @@ -863,6 +931,7 @@ rclone\ size\ remote:path\ [flags] .nf \f[C] \ \ \-h,\ \-\-help\ \ \ help\ for\ size +\ \ \ \ \ \ \-\-json\ \ \ format\ output\ as\ JSON \f[] .fi .SS rclone version @@ -1003,6 +1072,9 @@ the newest one. \f[C]\-\-dedupe\-mode\ oldest\f[] \- removes identical files then keeps the oldest one. .IP \[bu] 2 +\f[C]\-\-dedupe\-mode\ largest\f[] \- removes identical files then keeps +the largest one. +.IP \[bu] 2 \f[C]\-\-dedupe\-mode\ rename\f[] \- removes identical files then renames the rest to be different. .PP @@ -1036,6 +1108,86 @@ rclone\ dedupe\ [mode]\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ dedupe \f[] .fi +.SS rclone about +.PP +Get quota information from the remote. +.SS Synopsis +.PP +Get quota information from the remote, like bytes used/free/quota and +bytes used in the trash. +Not supported by all remotes. +.PP +This will print to stdout something like this: +.IP +.nf +\f[C] +Total:\ \ \ 17G +Used:\ \ \ \ 7.444G +Free:\ \ \ \ 1.315G +Trashed:\ 100.000M +Other:\ \ \ 8.241G +\f[] +.fi +.PP +Where the fields are: +.IP \[bu] 2 +Total: total size available. +.IP \[bu] 2 +Used: total size used +.IP \[bu] 2 +Free: total amount this user could upload. +.IP \[bu] 2 +Trashed: total amount in the trash +.IP \[bu] 2 +Other: total amount in other storage (eg Gmail, Google Photos) +.IP \[bu] 2 +Objects: total number of objects in the storage +.PP +Note that not all the backends provide all the fields \- they will be +missing if they are not known for that backend. +Where it is known that the value is unlimited the value will also be +omitted. 
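+.PP
+For example, a hypothetical backend which only reports the amount of
+storage used might print just:
+.IP
+.nf
+\f[C]
+Used:\ \ \ \ 7.444G
+\f[]
+.fi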
+.PP +Use the \-\-full flag to see the numbers written out in full, eg +.IP +.nf +\f[C] +Total:\ \ \ 18253611008 +Used:\ \ \ \ 7993453766 +Free:\ \ \ \ 1411001220 +Trashed:\ 104857602 +Other:\ \ \ 8849156022 +\f[] +.fi +.PP +Use the \-\-json flag for a computer readable output, eg +.IP +.nf +\f[C] +{ +\ \ \ \ "total":\ 18253611008, +\ \ \ \ "used":\ 7993453766, +\ \ \ \ "trashed":\ 104857602, +\ \ \ \ "other":\ 8849156022, +\ \ \ \ "free":\ 1411001220 +} +\f[] +.fi +.IP +.nf +\f[C] +rclone\ about\ remote:\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \ \ \ \ \-\-full\ \ \ Full\ numbers\ instead\ of\ SI\ units +\ \ \-h,\ \-\-help\ \ \ help\ for\ about +\ \ \ \ \ \ \-\-json\ \ \ Format\ output\ as\ JSON +\f[] +.fi .SS rclone authorize .PP Remote authorization. @@ -1601,6 +1753,80 @@ rclone\ gendocs\ output_directory\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ gendocs \f[] .fi +.SS rclone hashsum +.PP +Produces an hashsum file for all the objects in the path. +.SS Synopsis +.PP +Produces a hash file for all the objects in the path using the hash +named. +The output is in the same format as the standard md5sum/sha1sum tool. +.PP +Run without a hash to see the list of supported hashes, eg +.IP +.nf +\f[C] +$\ rclone\ hashsum +Supported\ hashes\ are: +\ \ *\ MD5 +\ \ *\ SHA\-1 +\ \ *\ DropboxHash +\ \ *\ QuickXorHash +\f[] +.fi +.PP +Then +.IP +.nf +\f[C] +$\ rclone\ hashsum\ MD5\ remote:path +\f[] +.fi +.IP +.nf +\f[C] +rclone\ hashsum\ \ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ hashsum +\f[] +.fi +.SS rclone link +.PP +Generate public link to file/folder. +.SS Synopsis +.PP +rclone link will create or retrieve a public link to the given file or +folder. +.IP +.nf +\f[C] +rclone\ link\ remote:path/to/file +rclone\ link\ remote:path/to/folder/ +\f[] +.fi +.PP +If successful, the last line of the output will contain the link. +Exact capabilities depend on the remote, but the link will always be +created with the least constraints \[en] e.g. +no expiry, no password protection, accessible without account. +.IP +.nf +\f[C] +rclone\ link\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ link +\f[] +.fi .SS rclone listremotes .PP List all the remotes in the config file. @@ -1634,6 +1860,19 @@ By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix. .PP +Eg +.IP +.nf +\f[C] +$\ rclone\ lsf\ swift:bucket +bevajer5jef +canole +diwogej7 +ferejej3gux/ +fubuwic +\f[] +.fi +.PP Use the \-\-format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output: @@ -1650,6 +1889,19 @@ h\ \-\ hash So if you wanted the path, size and modification time, you would use \-\-format "pst", or maybe \-\-format "tsp" to put the path last. .PP +Eg +.IP +.nf +\f[C] +$\ rclone\ lsf\ \ \-\-format\ "tsp"\ swift:bucket +2016\-06\-25\ 18:55:41;60295;bevajer5jef +2016\-06\-25\ 18:55:43;90613;canole +2016\-06\-25\ 18:55:43;94467;diwogej7 +2018\-04\-26\ 08:50:45;0;ferejej3gux/ +2016\-06\-25\ 18:55:40;37600;fubuwic +\f[] +.fi +.PP If you specify "h" in the format you will get the MD5 hash by default, use the "\-\-hash" flag to change which hash you want. 
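+.PP
+Eg, to get SHA\-1 hashes instead (assuming the remote supports them):
+.IP
+.nf
+\f[C]
+rclone\ lsf\ \-\-format\ hp\ \-\-hash\ SHA\-1\ swift:bucket
+\f[]
+.fi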
Note that this can be returned as an empty string if it isn\[aq]t @@ -1665,6 +1917,19 @@ rclone\ lsf\ \-R\ \-\-hash\ MD5\ \-\-format\ hp\ \-\-separator\ "\ \ "\ \-\-file \f[] .fi .PP +Eg +.IP +.nf +\f[C] +$\ rclone\ lsf\ \-R\ \-\-hash\ MD5\ \-\-format\ hp\ \-\-separator\ "\ \ "\ \-\-files\-only\ swift:bucket\ +7908e352297f0f530b84a756f188baa3\ \ bevajer5jef +cd65ac234e6fea5925974a51cdd865cc\ \ canole +03b5341b4f234b9d984d03ad076bae91\ \ diwogej7 +8fd37c3810dd660778137ac3a66cc06d\ \ fubuwic +99713e14a4c4ff553acaf1930fad985b\ \ gixacuh7ku +\f[] +.fi +.PP (Though "rclone md5sum ." is an easier way of typing this.) .PP By default the separator is ";" this can be changed with the @@ -1672,6 +1937,19 @@ By default the separator is ";" this can be changed with the Note that separators aren\[aq]t escaped in the path so putting it last is a good strategy. .PP +Eg +.IP +.nf +\f[C] +$\ rclone\ lsf\ \ \-\-separator\ ","\ \-\-format\ "tshp"\ swift:bucket +2016\-06\-25\ 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef +2016\-06\-25\ 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole +2016\-06\-25\ 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7 +2018\-04\-26\ 08:52:53,0,,ferejej3gux/ +2016\-06\-25\ 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic +\f[] +.fi +.PP Any of the filtering options can be applied to this commmand. .PP There are several related list commands @@ -1690,11 +1968,15 @@ There are several related list commands \f[C]lsf\f[] is designed to be human and machine readable. \f[C]lsjson\f[] is designed to be machine readable. .PP -Note that \f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] all recurse by default -\- use "\-\-max\-depth 1" to stop the recursion. +Note that \f[C]ls\f[] and \f[C]lsl\f[] recurse by default \- use +"\-\-max\-depth 1" to stop the recursion. .PP -The other list commands \f[C]lsf\f[],\f[C]lsjson\f[] do not recurse by -default \- use "\-R" to make them recurse. +The other list commands \f[C]lsd\f[],\f[C]lsf\f[],\f[C]lsjson\f[] do not +recurse by default \- use "\-R" to make them recurse. +.PP +Listing a non existent directory will produce an error except for +remotes which can\[aq]t have empty directories (eg s3, swift, gcs, etc +\- the bucket based remotes). .IP .nf \f[C] @@ -1768,11 +2050,15 @@ There are several related list commands \f[C]lsf\f[] is designed to be human and machine readable. \f[C]lsjson\f[] is designed to be machine readable. .PP -Note that \f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] all recurse by default -\- use "\-\-max\-depth 1" to stop the recursion. +Note that \f[C]ls\f[] and \f[C]lsl\f[] recurse by default \- use +"\-\-max\-depth 1" to stop the recursion. .PP -The other list commands \f[C]lsf\f[],\f[C]lsjson\f[] do not recurse by -default \- use "\-R" to make them recurse. +The other list commands \f[C]lsd\f[],\f[C]lsf\f[],\f[C]lsjson\f[] do not +recurse by default \- use "\-R" to make them recurse. +.PP +Listing a non existent directory will produce an error except for +remotes which can\[aq]t have empty directories (eg s3, swift, gcs, etc +\- the bucket based remotes). .IP .nf \f[C] @@ -1895,12 +2181,35 @@ solutions to make mount mount more reliable. You can use the flag \-\-attr\-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries. .PP -The default is 0s \- no caching \- which is recommended for filesystems -which can change outside the control of the kernel. +The default is "1s" which caches files just long enough to avoid too +many callbacks to rclone from the kernel. 
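+.PP
+For example, to cache attributes for longer (the value here is
+illustrative):
+.IP
+.nf
+\f[C]
+rclone\ mount\ remote:path\ /path/to/mountpoint\ \-\-attr\-timeout\ 10s
+\f[]
+.fi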
.PP -If you set it higher (\[aq]1s\[aq] or \[aq]1m\[aq] say) then the kernel +In theory 0s should be the correct value for filesystems which can +change outside the control of the kernel. +However this causes quite a few problems such as rclone using too much +memory (https://github.com/ncw/rclone/issues/2157), rclone not serving +files to +samba (https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112) +and excessive time listing +directories (https://github.com/ncw/rclone/issues/2095#issuecomment-371141147). +.PP +The kernel can cache the info about a file for the time given by +"\-\-attr\-timeout". +You may see corruption if the remote file changes length during this +window. +It will show up as either a truncated file or a file with garbage on the +end. +With "\-\-attr\-timeout 1s" this is very unlikely but not impossible. +The higher you set "\-\-attr\-timeout" the more likely it is. +The default setting of "1s" is the lowest setting which mitigates the +problems above. +.PP +If you set it higher (\[aq]10s\[aq] or \[aq]1m\[aq] say) then the kernel will call back to rclone less often making it more efficient, however -there may be strange effects when files change on the remote. +there is more chance of the corruption issue above. +.PP +If files don\[aq]t change on the remote outside of the control of rclone +then there is no chance of corruption. .PP This is the same as setting the attr_timeout option in mount.fuse. .SS Filters @@ -2068,7 +2377,7 @@ rclone\ mount\ remote:path\ /path/to/mountpoint\ [flags] \ \ \ \ \ \ \-\-allow\-non\-empty\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ mounting\ over\ a\ non\-empty\ directory. \ \ \ \ \ \ \-\-allow\-other\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ access\ to\ other\ users. \ \ \ \ \ \ \-\-allow\-root\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ access\ to\ root\ user. -\ \ \ \ \ \ \-\-attr\-timeout\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ for\ which\ file/directory\ attributes\ are\ cached. +\ \ \ \ \ \ \-\-attr\-timeout\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ for\ which\ file/directory\ attributes\ are\ cached.\ (default\ 1s) \ \ \ \ \ \ \-\-daemon\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Run\ mount\ as\ a\ daemon\ (background\ mode). \ \ \ \ \ \ \-\-debug\-fuse\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Debug\ the\ FUSE\ internals\ \-\ needs\ \-v. \ \ \ \ \ \ \-\-default\-permissions\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Makes\ kernel\ enforce\ access\ control\ based\ on\ the\ file\ mode. @@ -2170,6 +2479,7 @@ Here are the keys \- press \[aq]?\[aq] to toggle the help on and off \ c\ toggle\ counts \ g\ toggle\ graph \ n,s,C\ sort\ by\ name,size,count +\ ^L\ refresh\ screen \ ?\ to\ toggle\ help\ on\ and\ off \ q/ESC/c\-C\ to\ quit \f[] @@ -2368,9 +2678,10 @@ Use \-\-addr to specify which IP address and port the server should listen on, eg \-\-addr 1.2.3.4:8000 or \-\-addr :8080 to listen to all IPs. By default it only listens on localhost. +You can use port :0 to let the OS choose an available port. .PP If you set \-\-addr to listen on a public or LAN accessible IP address -then using Authentication if advised \- see the next section for info. +then using Authentication is advised \- see the next section for info. .PP \-\-server\-read\-timeout and \-\-server\-write\-timeout can be used to control the timeouts on the server. 
@@ -2685,9 +2996,10 @@ Use \-\-addr to specify which IP address and port the server should listen on, eg \-\-addr 1.2.3.4:8000 or \-\-addr :8080 to listen to all IPs. By default it only listens on localhost. +You can use port :0 to let the OS choose an available port. .PP If you set \-\-addr to listen on a public or LAN accessible IP address -then using Authentication if advised \- see the next section for info. +then using Authentication is advised \- see the next section for info. .PP \-\-server\-read\-timeout and \-\-server\-write\-timeout can be used to control the timeouts on the server. @@ -2743,6 +3055,7 @@ rclone\ serve\ restic\ remote:path\ [flags] .nf \f[C] \ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:8080") +\ \ \ \ \ \ \-\-append\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ disallow\ deletion\ of\ repository\ data \ \ \ \ \ \ \-\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ key\ (concatenation\ of\ certificate\ and\ CA\ certificate) \ \ \ \ \ \ \-\-client\-ca\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ certificate\ authority\ to\ verify\ clients\ with \ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ restic @@ -2775,9 +3088,10 @@ Use \-\-addr to specify which IP address and port the server should listen on, eg \-\-addr 1.2.3.4:8000 or \-\-addr :8080 to listen to all IPs. By default it only listens on localhost. +You can use port :0 to let the OS choose an available port. .PP If you set \-\-addr to listen on a public or LAN accessible IP address -then using Authentication if advised \- see the next section for info. +then using Authentication is advised \- see the next section for info. .PP \-\-server\-read\-timeout and \-\-server\-write\-timeout can be used to control the timeouts on the server. @@ -3253,7 +3567,8 @@ Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". .PP Options which use SIZE use kByte by default. However, a suffix of \f[C]b\f[] for bytes, \f[C]k\f[] for kBytes, -\f[C]M\f[] for MBytes and \f[C]G\f[] for GBytes may be used. +\f[C]M\f[] for MBytes, \f[C]G\f[] for GBytes, \f[C]T\f[] for TBytes and +\f[C]P\f[] for PBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively. .SS \-\-backup\-dir=DIR .PP @@ -3530,6 +3845,10 @@ This is not active by default. This can be useful for tracking down problems with syncs in combination with the \f[C]\-v\f[] flag. See the Logging section (#logging) for more info. +.PP +Note that if you are using the \f[C]logrotate\f[] program to manage +rclone\[aq]s logs, then you should use the \f[C]copytruncate\f[] option +as rclone doesn\[aq]t have a signal to rotate logs. .SS \-\-log\-level LEVEL .PP This sets the log level for rclone. @@ -3869,6 +4188,21 @@ This can be useful when transferring to a remote which doesn\[aq]t support mod times directly as it is more accurate than a \f[C]\-\-size\-only\f[] check and faster than using \f[C]\-\-checksum\f[]. +.SS \-\-use\-server\-modtime +.PP +Some object\-store backends (e.g, Swift, S3) do not preserve file +modification times (modtime). +On these backends, rclone stores the original modtime as additional +metadata on the object. +By default it will make an API call to retrieve the metadata when the +modtime is needed by an operation. +.PP +Use this flag to disable the extra API call and rely instead on the +server\[aq]s modified time. 
+In cases such as a local to remote sync, knowing the local file is newer +than the time it was last uploaded to the remote is sufficient. +In those cases, this flag can speed up the process and reduce the number +of API calls necessary. .SS \-v, \-vv, \-\-verbose .PP With \f[C]\-v\f[] rclone will tell you about each file that is @@ -4033,6 +4367,15 @@ Useful for debugging only. .PP Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on. +.SS \-\-dump goroutines +.PP +This dumps a list of the running go\-routines at the end of the command +to standard output. +.SS \-\-dump openfiles +.PP +This dumps a list of the open files at the end of the command. +It uses the \f[C]lsof\f[] command to do that so you\[aq]ll need that +installed to use it. .SS \-\-memprofile=FILE .PP Write memory profile to file. @@ -4992,6 +5335,24 @@ $\ rclone\ rc\ rc/noop\ param1=one\ param2=two Run \f[C]rclone\ rc\f[] on its own to see the help for the installed remote control commands. .SS Supported commands +.SS cache/expire: Purge a remote from cache +.PP +Purge a remote from the cache backend. +Supports either a directory or a file. +Params: \- remote = path to remote (required) \- withData = true/false +to delete cached data (chunks) as well (optional) +.PP +Eg +.IP +.nf +\f[C] +rclone\ rc\ cache/expire\ remote=path/to/sub/folder/ +rclone\ rc\ cache/expire\ remote=/\ withData=true +\f[] +.fi +.SS cache/stats: Get cache stats +.PP +Show statistics for the cache remote. .SS core/bwlimit: Set the bandwidth limit. .PP This sets the bandwidth limit to that passed in. @@ -5000,19 +5361,46 @@ Eg .IP .nf \f[C] -rclone\ core/bwlimit\ rate=1M -rclone\ core/bwlimit\ rate=off +rclone\ rc\ core/bwlimit\ rate=1M +rclone\ rc\ core/bwlimit\ rate=off \f[] .fi -.SS cache/expire: Purge a remote from cache .PP -Purge a remote from the cache backend. -Supports either a directory or a file. -Params: +The format of the parameter is exactly the same as passed to \-\-bwlimit +except only one bandwidth may be specified. +.SS core/memstats: Returns the memory statistics +.PP +This returns the memory statistics of the running program. +What the values mean are explained in the go docs: +https://golang.org/pkg/runtime/#MemStats +.PP +The most interesting values for most people are: .IP \[bu] 2 -remote = path to remote (required) +HeapAlloc: This is the amount of memory rclone is actually using .IP \[bu] 2 -withData = true/false to delete cached data (chunks) as well (optional) +HeapSys: This is the amount of memory rclone has obtained from the OS +.IP \[bu] 2 +Sys: this is the total amount of memory requested from the OS +.IP \[bu] 2 +It is virtual memory so may include unused memory +.SS core/pid: Return PID of current process +.PP +This returns PID of current process. +Useful for stopping rclone process. +.SS rc/error: This returns an error +.PP +This returns an error with the input as part of its error string. +Useful for testing error handling. +.SS rc/list: List all the registered remote control commands +.PP +This lists all the registered remote control commands as a JSON map in +the commands response. +.SS rc/noop: Echo the input to the output parameters +.PP +This echoes the input parameters to the output parameters for testing +purposes. +It can be used to check that rclone is still alive and to check that +parameter passing is working properly. .SS vfs/forget: Forget files or directories in the directory cache. 
.PP This forgets the paths in the directory cache causing them to be @@ -5036,20 +5424,6 @@ starting with dir will forget that dir, eg rclone\ rc\ vfs/forget\ file=hello\ file2=goodbye\ dir=home/junk \f[] .fi -.SS rc/noop: Echo the input to the output parameters -.PP -This echoes the input parameters to the output parameters for testing -purposes. -It can be used to check that rclone is still alive and to check that -parameter passing is working properly. -.SS rc/error: This returns an error -.PP -This returns an error with the input as part of its error string. -Useful for testing error handling. -.SS rc/list: List all the registered remote control commands -.PP -This lists all the registered remote control commands as a JSON map in -the commands response. .SS Accessing the remote control via HTTP .PP Rclone implements a simple HTTP based protocol. @@ -5353,6 +5727,19 @@ T}@T{ R/W T} T{ +Mega +T}@T{ +\- +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +\- +T} +T{ Microsoft Azure Blob Storage T}@T{ MD5 @@ -5368,7 +5755,7 @@ T} T{ Microsoft OneDrive T}@T{ -SHA1 +SHA1 ‡‡ T}@T{ Yes T}@T{ @@ -5489,6 +5876,10 @@ This is an SHA256 sum of all the 4MB block SHA256s. remote\[aq]s PATH. .PP †† WebDAV supports modtimes when used with Owncloud and Nextcloud only. +.PP +‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive +for business and SharePoint server support Microsoft\[aq]s own +QuickXorHash (https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash). .SS ModTime .PP The cloud storage system supports setting modification times on objects. @@ -5558,7 +5949,7 @@ more efficient. .PP .TS tab(@); -l c c c c c c c. +l c c c c c c c c c. T{ Name T}@T{ @@ -5575,6 +5966,10 @@ T}@T{ ListR T}@T{ StreamUpload +T}@T{ +LinkSharing +T}@T{ +About T} _ T{ @@ -5593,6 +5988,10 @@ T}@T{ No T}@T{ No +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +No T} T{ Amazon S3 @@ -5610,6 +6009,10 @@ T}@T{ Yes T}@T{ Yes +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +No T} T{ Backblaze B2 @@ -5627,6 +6030,10 @@ T}@T{ Yes T}@T{ Yes +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +No T} T{ Box @@ -5644,6 +6051,10 @@ T}@T{ No T}@T{ Yes +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +No T} T{ Dropbox @@ -5661,6 +6072,10 @@ T}@T{ No T}@T{ Yes +T}@T{ +Yes +T}@T{ +Yes T} T{ FTP @@ -5678,6 +6093,10 @@ T}@T{ No T}@T{ Yes +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +No T} T{ Google Cloud Storage @@ -5695,6 +6114,10 @@ T}@T{ Yes T}@T{ Yes +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +No T} T{ Google Drive @@ -5712,6 +6135,10 @@ T}@T{ No T}@T{ Yes +T}@T{ +Yes +T}@T{ +Yes T} T{ HTTP @@ -5729,6 +6156,10 @@ T}@T{ No T}@T{ No +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +No T} T{ Hubic @@ -5746,6 +6177,31 @@ T}@T{ Yes T}@T{ Yes +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +Yes +T} +T{ +Mega +T}@T{ +Yes +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +Yes T} T{ Microsoft Azure Blob Storage @@ -5763,6 +6219,10 @@ T}@T{ Yes T}@T{ No +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +No T} T{ Microsoft OneDrive @@ -5780,6 +6240,10 @@ T}@T{ No T}@T{ No +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +Yes T} T{ Openstack Swift @@ -5797,6 +6261,10 @@ T}@T{ Yes T}@T{ Yes +T}@T{ +No #2178 
(https://github.com/ncw/rclone/issues/2178) +T}@T{ +Yes T} T{ pCloud @@ -5814,6 +6282,10 @@ T}@T{ No T}@T{ No +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +Yes T} T{ QingStor @@ -5831,6 +6303,10 @@ T}@T{ Yes T}@T{ No +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +No T} T{ SFTP @@ -5848,6 +6324,10 @@ T}@T{ No T}@T{ Yes +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +No T} T{ WebDAV @@ -5865,6 +6345,10 @@ T}@T{ No T}@T{ Yes ‡ +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +No T} T{ Yandex Disk @@ -5882,6 +6366,10 @@ T}@T{ Yes T}@T{ Yes +T}@T{ +No #2178 (https://github.com/ncw/rclone/issues/2178) +T}@T{ +No T} T{ The local filesystem @@ -5899,6 +6387,10 @@ T}@T{ No T}@T{ Yes +T}@T{ +No +T}@T{ +Yes T} .TE .SS Purge @@ -5959,6 +6451,18 @@ advance. This allows certain operations to work without spooling the file to local disk first, e.g. \f[C]rclone\ rcat\f[]. +.SS LinkSharing +.PP +Sets the necessary permissions on a file or folder and prints a link +that allows others to access them, even if they don\[aq]t have an +account on the particular cloud provider. +.SS About +.PP +This is used to fetch quota information from the remote, like bytes +used/free/quota and bytes used in the trash. +.PP +If the server can\[aq]t do \f[C]About\f[] then \f[C]rclone\ about\f[] +will return an error. .SS Alias .PP The \f[C]alias\f[] remote provides a new name for another remote. @@ -6340,12 +6844,65 @@ To avoid this problem, use \f[C]\-\-max\-size\ 50000M\f[] option to limit the maximum size of uploaded files. Note that \f[C]\-\-max\-size\f[] does not split files into segments, it only ignores files over this size. -.SS Amazon S3 +.SS Amazon S3 Storage Providers +.PP +The S3 backend can be used with a number of different providers: +.IP \[bu] 2 +AWS S3 +.IP \[bu] 2 +Ceph +.IP \[bu] 2 +DigitalOcean Spaces +.IP \[bu] 2 +Dreamhost +.IP \[bu] 2 +IBM COS S3 +.IP \[bu] 2 +Minio +.IP \[bu] 2 +Wasabi .PP Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for the \f[C]lsd\f[] command.) You may put subdirectories in too, eg \f[C]remote:bucket/path/to/dir\f[]. .PP +Once you have made a remote (see the provider specific section above) +you can use it like this: +.PP +See all buckets +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +Make a new bucket +.IP +.nf +\f[C] +rclone\ mkdir\ remote:bucket +\f[] +.fi +.PP +List the contents of a bucket +.IP +.nf +\f[C] +rclone\ ls\ remote:bucket +\f[] +.fi +.PP +Sync \f[C]/home/local/directory\f[] to the remote bucket, deleting any +excess files in the bucket. +.IP +.nf +\f[C] +rclone\ sync\ /home/local/directory\ remote:bucket +\f[] +.fi +.SS AWS S3 +.PP Here is an example of making an s3 configuration. First run .IP @@ -6371,7 +6928,7 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ \ \ \\\ "alias" \ 2\ /\ Amazon\ Drive \ \ \ \\\ "amazon\ cloud\ drive" -\ 3\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) +\ 3\ /\ Amazon\ S3\ Compliant\ Storage\ Providers\ (AWS,\ Ceph,\ Dreamhost,\ IBM\ COS,\ Minio) \ \ \ \\\ "s3" \ 4\ /\ Backblaze\ B2 \ \ \ \\\ "b2" @@ -6379,6 +6936,25 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value 23\ /\ http\ Connection \ \ \ \\\ "http" Storage>\ s3 +Choose\ your\ S3\ provider. 
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Amazon\ Web\ Services\ (AWS)\ S3 +\ \ \ \\\ "AWS" +\ 2\ /\ Ceph\ Object\ Storage +\ \ \ \\\ "Ceph" +\ 3\ /\ Digital\ Ocean\ Spaces +\ \ \ \\\ "DigitalOcean" +\ 4\ /\ Dreamhost\ DreamObjects +\ \ \ \\\ "Dreamhost" +\ 5\ /\ IBM\ COS\ S3 +\ \ \ \\\ "IBMCOS" +\ 6\ /\ Minio\ Object\ Storage +\ \ \ \\\ "Minio" +\ 7\ /\ Wasabi\ Object\ Storage +\ \ \ \\\ "Wasabi" +\ 8\ /\ Any\ other\ S3\ compatible\ provider +\ \ \ \\\ "Other" +provider>\ 1 Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2/ECS\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ 1\ /\ Enter\ AWS\ credentials\ in\ the\ next\ step @@ -6390,7 +6966,7 @@ AWS\ Access\ Key\ ID\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ cre access_key_id>\ XXX AWS\ Secret\ Access\ Key\ (password)\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials. secret_access_key>\ YYY -Region\ to\ connect\ to.\ \ Leave\ blank\ if\ you\ are\ using\ an\ S3\ clone\ and\ you\ don\[aq]t\ have\ a\ region. +Region\ to\ connect\ to. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ \ \ /\ The\ default\ endpoint\ \-\ a\ good\ choice\ if\ you\ are\ unsure. \ 1\ |\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest. @@ -6435,13 +7011,9 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ \ \ /\ South\ America\ (Sao\ Paulo)\ Region 14\ |\ Needs\ location\ constraint\ sa\-east\-1. \ \ \ \\\ "sa\-east\-1" -\ \ \ /\ Use\ this\ only\ if\ v4\ signatures\ don\[aq]t\ work,\ eg\ pre\ Jewel/v10\ CEPH. -15\ |\ Set\ this\ and\ make\ sure\ you\ set\ the\ endpoint. -\ \ \ \\\ "other\-v2\-signature" region>\ 1 Endpoint\ for\ S3\ API. Leave\ blank\ if\ using\ AWS\ to\ use\ the\ default\ endpoint\ for\ the\ region. -Specify\ if\ using\ an\ S3\ clone\ such\ as\ Ceph. endpoint>\ Location\ constraint\ \-\ must\ be\ set\ to\ match\ the\ Region.\ Used\ when\ creating\ buckets\ only. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value @@ -6510,10 +7082,14 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ \ \ \\\ "REDUCED_REDUNDANCY" \ 4\ /\ Standard\ Infrequent\ Access\ storage\ class \ \ \ \\\ "STANDARD_IA" +\ 5\ /\ One\ Zone\ Infrequent\ Access\ storage\ class +\ \ \ \\\ "ONEZONE_IA" storage_class>\ 1 Remote\ config \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- [remote] +type\ =\ s3 +provider\ =\ AWS env_auth\ =\ false access_key_id\ =\ XXX secret_access_key\ =\ YYY @@ -6527,42 +7103,7 @@ storage_class\ =\ y)\ Yes\ this\ is\ OK e)\ Edit\ this\ remote d)\ Delete\ this\ remote -y/e/d>\ y -\f[] -.fi -.PP -This remote is called \f[C]remote\f[] and can now be used like this -.PP -See all buckets -.IP -.nf -\f[C] -rclone\ lsd\ remote: -\f[] -.fi -.PP -Make a new bucket -.IP -.nf -\f[C] -rclone\ mkdir\ remote:bucket -\f[] -.fi -.PP -List the contents of a bucket -.IP -.nf -\f[C] -rclone\ ls\ remote:bucket -\f[] -.fi -.PP -Sync \f[C]/home/local/directory\f[] to the remote bucket, deleting any -excess files in the bucket. -.IP -.nf -\f[C] -rclone\ sync\ /home/local/directory\ remote:bucket +y/e/d>\ \f[] .fi .SS \-\-fast\-list @@ -6570,6 +7111,21 @@ rclone\ sync\ /home/local/directory\ remote:bucket This remote supports \f[C]\-\-fast\-list\f[] which allows you to use fewer transactions in exchange for more memory. See the rclone docs (/docs/#fast-list) for more details. 
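+.PP
+Eg, combined with the sync example above:
+.IP
+.nf
+\f[C]
+rclone\ sync\ \-\-fast\-list\ /home/local/directory\ remote:bucket
+\f[]
+.fi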
+.SS \-\-update and \-\-use\-server\-modtime +.PP +As noted below, the modified time is stored on metadata on the object. +It is used by default for all operations that require checking the time +a file was last updated. +It allows rclone to treat the remote more like a true filesystem, but it +is inefficient because it requires an extra API call to retrieve the +metadata. +.PP +For many operations, the time the object was last uploaded to the remote +is sufficient to determine if it is "dirty". +By using \f[C]\-\-update\f[] along with +\f[C]\-\-use\-server\-modtime\f[], you can avoid the extra API call and +simply upload files whose local modtime is newer than the time it was +last uploaded. .SS Modified time .PP The modified time is stored as metadata on the object as @@ -6591,22 +7147,22 @@ error, \f[C]incorrect\ region,\ the\ bucket\ is\ not\ in\ \[aq]XXX\[aq]\ region\f[]. .SS Authentication .PP -There are two ways to supply \f[C]rclone\f[] with a set of AWS -credentials. -In order of precedence: +There are a number of ways to supply \f[C]rclone\f[] with a set of AWS +credentials, with and without using the environment. +.PP +The different authentication methods are tried in this order: .IP \[bu] 2 -Directly in the rclone configuration file (as configured by -\f[C]rclone\ config\f[]) +Directly in the rclone configuration file (\f[C]env_auth\ =\ false\f[] +in the config file): +.IP \[bu] 2 +\f[C]access_key_id\f[] and \f[C]secret_access_key\f[] are required. .IP \[bu] 2 -set \f[C]access_key_id\f[] and \f[C]secret_access_key\f[]. \f[C]session_token\f[] can be optionally set when using AWS STS. .IP \[bu] 2 -Runtime configuration: +Runtime configuration (\f[C]env_auth\ =\ true\f[] in the config file): .IP \[bu] 2 -set \f[C]env_auth\f[] to \f[C]true\f[] in the config file -.IP \[bu] 2 -Exporting the following environment variables before running -\f[C]rclone\f[] +Export the following environment variables before running +\f[C]rclone\f[]: .RS 2 .IP \[bu] 2 Access Key ID: \f[C]AWS_ACCESS_KEY_ID\f[] or \f[C]AWS_ACCESS_KEY\f[] @@ -6614,12 +7170,29 @@ Access Key ID: \f[C]AWS_ACCESS_KEY_ID\f[] or \f[C]AWS_ACCESS_KEY\f[] Secret Access Key: \f[C]AWS_SECRET_ACCESS_KEY\f[] or \f[C]AWS_SECRET_KEY\f[] .IP \[bu] 2 -Session Token: \f[C]AWS_SESSION_TOKEN\f[] +Session Token: \f[C]AWS_SESSION_TOKEN\f[] (optional) .RE .IP \[bu] 2 -Running \f[C]rclone\f[] in an ECS task with an IAM role +Or, use a named +profile (https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html): +.RS 2 .IP \[bu] 2 -Running \f[C]rclone\f[] on an EC2 instance with an IAM role +Profile files are standard files used by AWS CLI tools +.IP \[bu] 2 +By default it will use the profile in your home directory (eg +\f[C]~/.aws/credentials\f[] on unix based systems) file and the +"default" profile, to change set these environment variables: +.RS 2 +.IP \[bu] 2 +\f[C]AWS_SHARED_CREDENTIALS_FILE\f[] to control which file. +.IP \[bu] 2 +\f[C]AWS_PROFILE\f[] to control which profile to use. +.RE +.RE +.IP \[bu] 2 +Or, run \f[C]rclone\f[] in an ECS task with an IAM role (AWS only). +.IP \[bu] 2 +Or, run \f[C]rclone\f[] on an EC2 instance with an IAM role (AWS only). 
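+.PP
+For example, with \f[C]env_auth\ =\ true\f[] in the config file, the
+credentials could be supplied like this (the values are placeholders):
+.IP
+.nf
+\f[C]
+export\ AWS_ACCESS_KEY_ID=XXX
+export\ AWS_SECRET_ACCESS_KEY=YYY
+rclone\ lsd\ remote:
+\f[]
+.fi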
.PP If none of these option actually end up providing \f[C]rclone\f[] with AWS credentials then S3 interaction will be non\-authenticated (see @@ -6724,47 +7297,40 @@ STANDARD \- default storage class .IP \[bu] 2 STANDARD_IA \- for less frequently accessed data (e.g backups) .IP \[bu] 2 +ONEZONE_IA \- for storing data in only one Availability Zone +.IP \[bu] 2 REDUCED_REDUNDANCY (only for noncritical, reproducible data, has lower redundancy) +.SS \-\-s3\-chunk\-size=SIZE +.PP +Any files larger than this will be uploaded in chunks of this size. +The default is 5MB. +The minimum is 5MB. +.PP +Note that 2 chunks of this size are buffered in memory per transfer. +.PP +If you are transferring large files over high speed links and you have +enough memory, then increasing this will speed up the transfers. .SS Anonymous access to public buckets .PP If you want to use rclone to access a public bucket, configure with a blank \f[C]access_key_id\f[] and \f[C]secret_access_key\f[]. -Eg +Your config should end up looking like this: .IP .nf \f[C] -No\ remotes\ found\ \-\ make\ a\ new\ one -n)\ New\ remote -q)\ Quit\ config -n/q>\ n -name>\ anons3 -What\ type\ of\ source\ is\ it? -Choose\ a\ number\ from\ below -\ 1)\ amazon\ cloud\ drive -\ 2)\ b2 -\ 3)\ drive -\ 4)\ dropbox -\ 5)\ google\ cloud\ storage -\ 6)\ swift -\ 7)\ hubic -\ 8)\ local -\ 9)\ onedrive -10)\ s3 -11)\ yandex -type>\ 10 -Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2/ECS\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank. -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ *\ Enter\ AWS\ credentials\ in\ the\ next\ step -\ 1)\ false -\ *\ Get\ AWS\ credentials\ from\ the\ environment\ (env\ vars\ or\ IAM) -\ 2)\ true -env_auth>\ 1 -AWS\ Access\ Key\ ID\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials. -access_key_id> -AWS\ Secret\ Access\ Key\ (password)\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials. -secret_access_key> -\&... 
+[anons3] +type\ =\ s3 +provider\ =\ AWS +env_auth\ =\ false +access_key_id\ =\ +secret_access_key\ =\ +region\ =\ us\-east\-1 +endpoint\ =\ +location_constraint\ =\ +acl\ =\ private +server_side_encryption\ =\ +storage_class\ =\ \f[] .fi .PP @@ -6791,15 +7357,16 @@ You should end up with something like this in your config: \f[C] [ceph] type\ =\ s3 +provider\ =\ Ceph env_auth\ =\ false access_key_id\ =\ XXX secret_access_key\ =\ YYY -region\ =\ +region\ = endpoint\ =\ https://ceph.endpoint.example.com -location_constraint\ =\ -acl\ =\ -server_side_encryption\ =\ -storage_class\ =\ +location_constraint\ = +acl\ = +server_side_encryption\ = +storage_class\ = \f[] .fi .PP @@ -6843,6 +7410,8 @@ You should end up with something like this in your config: .nf \f[C] [dreamobjects] +type\ =\ s3 +provider\ =\ DreamHost env_auth\ =\ false access_key_id\ =\ your_access_key secret_access_key\ =\ your_secret_key @@ -6883,11 +7452,11 @@ Storage>\ s3 env_auth>\ 1 access_key_id>\ YOUR_ACCESS_KEY secret_access_key>\ YOUR_SECRET_KEY -region>\ +region> endpoint>\ nyc3.digitaloceanspaces.com -location_constraint>\ -acl>\ -storage_class>\ +location_constraint> +acl> +storage_class> \f[] .fi .PP @@ -6897,15 +7466,16 @@ The resulting configuration file should look like: \f[C] [spaces] type\ =\ s3 +provider\ =\ DigitalOcean env_auth\ =\ false access_key_id\ =\ YOUR_ACCESS_KEY secret_access_key\ =\ YOUR_SECRET_KEY -region\ =\ +region\ = endpoint\ =\ nyc3.digitaloceanspaces.com -location_constraint\ =\ -acl\ =\ -server_side_encryption\ =\ -storage_class\ =\ +location_constraint\ = +acl\ = +server_side_encryption\ = +storage_class\ = \f[] .fi .PP @@ -6925,7 +7495,7 @@ dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM's Cloud Object Storage System (formerly Cleversafe). -For more information visit: (https://www.ibm.com/cloud/object\-storage) +For more information visit: (http://www.ibm.com/cloud/object\-storage) .PP To configure access to IBM COS S3, follow the steps below: .IP " 1." 4 @@ -6949,7 +7519,7 @@ Enter the name for the configuration .IP .nf \f[C] -name>\ IBM\-COS\-XREGION +name>\ \f[] .fi .RE @@ -6959,30 +7529,41 @@ Select "s3" storage. .IP .nf \f[C] -Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive +1\ /\ Alias\ for\ a\ existing\ remote +\\\ "alias" +2\ /\ Amazon\ Drive \\\ "amazon\ cloud\ drive" -2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio,\ IBM\ COS(S3)) +3\ /\ Amazon\ S3\ Complaint\ Storage\ Providers\ (Dreamhost,\ Ceph,\ Minio,\ IBM\ COS) \\\ "s3" -3\ /\ Backblaze\ B2 -Storage>\ 2 +4\ /\ Backblaze\ B2 +\\\ "b2" +[snip] +23\ /\ http\ Connection +\\\ "http" +Storage>\ 3 \f[] .fi .RE .IP " 4." 4 -Select "Enter AWS credentials\&..." +Select IBM COS as the S3 Storage Provider. .RS 4 .IP .nf \f[C] -Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2/ECS\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank. +Choose\ the\ S3\ provider. 
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Enter\ AWS\ credentials\ in\ the\ next\ step -\\\ "false" -\ 2\ /\ Get\ AWS\ credentials\ from\ the\ environment\ (env\ vars\ or\ IAM) -\\\ "true" -env_auth>\ 1 +\ 1\ /\ Choose\ this\ option\ to\ configure\ Storage\ to\ AWS\ S3 +\ \ \ \\\ "AWS" +\ 2\ /\ Choose\ this\ option\ to\ configure\ Storage\ to\ Ceph\ Systems +\ \\\ "Ceph" +\ 3\ /\ \ Choose\ this\ option\ to\ configure\ Storage\ to\ Dreamhost +\ \\\ "Dreamhost" +\ \ \ 4\ /\ Choose\ this\ option\ to\ the\ configure\ Storage\ to\ IBM\ COS\ S3 +\ \\\ "IBMCOS" +\ 5\ /\ Choose\ this\ option\ to\ the\ configure\ Storage\ to\ Minio +\ \\\ "Minio" +\ Provider>4 \f[] .fi .RE @@ -7000,71 +7581,84 @@ secret_access_key>\ <> .fi .RE .IP " 6." 4 -Select "other\-v4\-signature" region. +Specify the endpoint for IBM COS. +For Public IBM COS, choose from the option below. +For On Premise IBM COS, enter an enpoint address. .RS 4 .IP .nf \f[C] -Region\ to\ connect\ to. +Endpoint\ for\ IBM\ COS\ S3\ API. +Specify\ if\ using\ an\ IBM\ COS\ On\ Premise. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -/\ The\ default\ endpoint\ \-\ a\ good\ choice\ if\ you\ are\ unsure. -\ 1\ |\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest. -|\ Leave\ location\ constraint\ empty. -\\\ "us\-east\-1" -/\ US\ East\ (Ohio)\ Region -2\ |\ Needs\ location\ constraint\ us\-east\-2. -\\\ "us\-east\-2" -/\ US\ West\ (Oregon)\ Region -\&...\&... -15\ |\ eg\ Ceph/Dreamhost -|\ set\ this\ and\ make\ sure\ you\ set\ the\ endpoint. -\\\ "other\-v2\-signature" -/\ If\ using\ an\ S3\ clone\ that\ understands\ v4\ signatures\ set\ this -16\ |\ and\ make\ sure\ you\ set\ the\ endpoint. -\\\ "other\-v4\-signature -region>\ 16 +\ 1\ /\ US\ Cross\ Region\ Endpoint +\ \ \ \\\ "s3\-api.us\-geo.objectstorage.softlayer.net" +\ 2\ /\ US\ Cross\ Region\ Dallas\ Endpoint +\ \ \ \\\ "s3\-api.dal.us\-geo.objectstorage.softlayer.net" +\ 3\ /\ US\ Cross\ Region\ Washington\ DC\ Endpoint +\ \ \ \\\ "s3\-api.wdc\-us\-geo.objectstorage.softlayer.net" +\ 4\ /\ US\ Cross\ Region\ San\ Jose\ Endpoint +\ \ \ \\\ "s3\-api.sjc\-us\-geo.objectstorage.softlayer.net" +\ 5\ /\ US\ Cross\ Region\ Private\ Endpoint +\ \ \ \\\ "s3\-api.us\-geo.objectstorage.service.networklayer.com" +\ 6\ /\ US\ Cross\ Region\ Dallas\ Private\ Endpoint +\ \ \ \\\ "s3\-api.dal\-us\-geo.objectstorage.service.networklayer.com" +\ 7\ /\ US\ Cross\ Region\ Washington\ DC\ Private\ Endpoint +\ \ \ \\\ "s3\-api.wdc\-us\-geo.objectstorage.service.networklayer.com" +\ 8\ /\ US\ Cross\ Region\ San\ Jose\ Private\ Endpoint +\ \ \ \\\ "s3\-api.sjc\-us\-geo.objectstorage.service.networklayer.com" +\ 9\ /\ US\ Region\ East\ Endpoint +\ \ \ \\\ "s3.us\-east.objectstorage.softlayer.net" +10\ /\ US\ Region\ East\ Private\ Endpoint +\ \ \ \\\ "s3.us\-east.objectstorage.service.networklayer.com" +11\ /\ US\ Region\ South\ Endpoint +[snip] +34\ /\ Toronto\ Single\ Site\ Private\ Endpoint +\ \ \ \\\ "s3.tor01.objectstorage.service.networklayer.com" +endpoint>1 \f[] .fi .RE .IP " 7." 4 -Enter the endpoint FQDN. +Specify a IBM COS Location Constraint. +The location constraint must match endpoint when using IBM Cloud Public. +For on\-prem COS, do not make a selection from this list, hit enter .RS 4 .IP .nf \f[C] -Leave\ blank\ if\ using\ AWS\ to\ use\ the\ default\ endpoint\ for\ the\ region. -Specify\ if\ using\ an\ S3\ clone\ such\ as\ Ceph. 
-endpoint>\ s3\-api.us\-geo.objectstorage.softlayer.net +\ 1\ /\ US\ Cross\ Region\ Standard +\ \ \ \\\ "us\-standard" +\ 2\ /\ US\ Cross\ Region\ Vault +\ \ \ \\\ "us\-vault" +\ 3\ /\ US\ Cross\ Region\ Cold +\ \ \ \\\ "us\-cold" +\ 4\ /\ US\ Cross\ Region\ Flex +\ \ \ \\\ "us\-flex" +\ 5\ /\ US\ East\ Region\ Standard +\ \ \ \\\ "us\-east\-standard" +\ 6\ /\ US\ East\ Region\ Vault +\ \ \ \\\ "us\-east\-vault" +\ 7\ /\ US\ East\ Region\ Cold +\ \ \ \\\ "us\-east\-cold" +\ 8\ /\ US\ East\ Region\ Flex +\ \ \ \\\ "us\-east\-flex" +\ 9\ /\ US\ South\ Region\ Standard +\ \ \ \\\ "us\-south\-standard" +10\ /\ US\ South\ Region\ Vault +\ \ \ \\\ "us\-south\-vault" +[snip] +32\ /\ Toronto\ Flex +\ \ \ \\\ "tor01\-flex" +location_constraint>1 \f[] .fi .RE .IP " 8." 4 -Specify a IBM COS Location Constraint. -.RS 4 -.IP "a." 3 -Currently, the only IBM COS values for LocationConstraint are: -us\-standard / us\-vault / us\-cold / us\-flex us\-east\-standard / -us\-east\-vault / us\-east\-cold / us\-east\-flex us\-south\-standard / -us\-south\-vault / us\-south\-cold / us\-south\-flex eu\-standard / -eu\-vault / eu\-cold / eu\-flex -.RS 4 -.IP -.nf -\f[C] -Location\ constraint\ \-\ must\ be\ set\ to\ match\ the\ Region.\ Used\ when\ creating\ buckets\ only. -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Empty\ for\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest. -\\\ "" -\ 2\ /\ US\ East\ (Ohio)\ Region. -\\\ "us\-east\-2" -\ \&...\&... -location_constraint>\ us\-standard -\f[] -.fi -.RE -.RE -.IP " 9." 4 Specify a canned ACL. +IBM Cloud (Strorage) supports "public\-read" and "private". +IBM Cloud(Infra) supports all the canned ACLs. +On\-Premise COS supports all the canned ACLs. .RS 4 .IP .nf @@ -7072,104 +7666,38 @@ Specify a canned ACL. Canned\ ACL\ used\ when\ creating\ buckets\ and/or\ storing\ objects\ in\ S3. For\ more\ info\ visit\ https://docs.aws.amazon.com/AmazonS3/latest/dev/acl\-overview.html#canned\-acl Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -1\ /\ Owner\ gets\ FULL_CONTROL.\ No\ one\ else\ has\ access\ rights\ (default). -\\\ "private" -2\ /\ Owner\ gets\ FULL_CONTROL.\ The\ AllUsers\ group\ gets\ READ\ access. -\\\ "public\-read" -/\ Owner\ gets\ FULL_CONTROL.\ The\ AllUsers\ group\ gets\ READ\ and\ WRITE\ access. -\ 3\ |\ Granting\ this\ on\ a\ bucket\ is\ generally\ not\ recommended. -\\\ "public\-read\-write" -\ 4\ /\ Owner\ gets\ FULL_CONTROL.\ The\ AuthenticatedUsers\ group\ gets\ READ\ access. -\\\ "authenticated\-read" -/\ Object\ owner\ gets\ FULL_CONTROL.\ Bucket\ owner\ gets\ READ\ access. -5\ |\ If\ you\ specify\ this\ canned\ ACL\ when\ creating\ a\ bucket,\ Amazon\ S3\ ignores\ it. -\\\ "bucket\-owner\-read" -/\ Both\ the\ object\ owner\ and\ the\ bucket\ owner\ get\ FULL_CONTROL\ over\ the\ object. -\ 6\ |\ If\ you\ specify\ this\ canned\ ACL\ when\ creating\ a\ bucket,\ Amazon\ S3\ ignores\ it. 
-\\\ "bucket\-owner\-full\-control" +\ \ 1\ /\ Owner\ gets\ FULL_CONTROL.\ No\ one\ else\ has\ access\ rights\ (default).\ This\ acl\ is\ available\ on\ IBM\ Cloud\ (Infra),\ IBM\ Cloud\ (Storage),\ On\-Premise\ COS +\ \ \\\ "private" +\ \ 2\ \ /\ Owner\ gets\ FULL_CONTROL.\ The\ AllUsers\ group\ gets\ READ\ access.\ This\ acl\ is\ available\ on\ IBM\ Cloud\ (Infra),\ IBM\ Cloud\ (Storage),\ On\-Premise\ IBM\ COS +\ \ \\\ "public\-read" +\ \ 3\ /\ Owner\ gets\ FULL_CONTROL.\ The\ AllUsers\ group\ gets\ READ\ and\ WRITE\ access.\ This\ acl\ is\ available\ on\ IBM\ Cloud\ (Infra),\ On\-Premise\ IBM\ COS +\ \ \\\ "public\-read\-write" +\ \ 4\ \ /\ Owner\ gets\ FULL_CONTROL.\ The\ AuthenticatedUsers\ group\ gets\ READ\ access.\ Not\ supported\ on\ Buckets.\ This\ acl\ is\ available\ on\ IBM\ Cloud\ (Infra)\ and\ On\-Premise\ IBM\ COS +\ \ \\\ "authenticated\-read" acl>\ 1 \f[] .fi .RE -.IP "10." 4 -Set the SSE option to "None". -.RS 4 -.IP -.nf -\f[C] -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ None -\\\ "" -2\ /\ AES256 -\\\ "AES256" -server_side_encryption>\ 1 -\f[] -.fi -.RE -.IP "11." 4 -Set the storage class to "None" (IBM COS uses the LocationConstraint at -the bucket level). -.RS 4 -.IP -.nf -\f[C] -The\ storage\ class\ to\ use\ when\ storing\ objects\ in\ S3. -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -1\ /\ Default -\\\ "" -\ 2\ /\ Standard\ storage\ class -\\\ "STANDARD" -\ 3\ /\ Reduced\ redundancy\ storage\ class -\\\ "REDUCED_REDUNDANCY" -\ 4\ /\ Standard\ Infrequent\ Access\ storage\ class -\ \\\ "STANDARD_IA" -storage_class> -\f[] -.fi -.RE -.IP "12." 4 +.IP " 9." 4 Review the displayed configuration and accept to save the "remote" then quit. +The config file should look like this .RS 4 .IP .nf \f[C] -Remote\ config -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -[IBM\-COS\-XREGION] -env_auth\ =\ false -access_key_id\ =\ <> -secret_access_key\ =\ <> -region\ =\ other\-v4\-signature +[xxx] +type\ =\ s3 +Provider\ =\ IBMCOS +access_key_id\ =\ xxx +secret_access_key\ =\ yyy endpoint\ =\ s3\-api.us\-geo.objectstorage.softlayer.net location_constraint\ =\ us\-standard acl\ =\ private -server_side_encryption\ =\ -storage_class\ = -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -y)\ Yes\ this\ is\ OK -e)\ Edit\ this\ remote -d)\ Delete\ this\ remote -y/e/d>\ y -Remote\ config -Current\ remotes: - -Name\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Type -====\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ==== -IBM\-COS\-XREGION\ \ \ \ \ \ s3 - -e)\ Edit\ existing\ remote -n)\ New\ remote -d)\ Delete\ remote -r)\ Rename\ remote -c)\ Copy\ remote -s)\ Set\ configuration\ password -q)\ Quit\ config -e/n/d/r/c/s/q>\ q \f[] .fi .RE -.IP "13." 4 +.IP "10." 4 Execute rclone commands .RS 4 .IP @@ -7251,6 +7779,8 @@ Which makes the config file look like this .nf \f[C] [minio] +type\ =\ s3 +provider\ =\ Minio env_auth\ =\ false access_key_id\ =\ USWUXHGYZQYFYFFIT3RE secret_access_key\ =\ MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 @@ -7322,21 +7852,21 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ 1\ /\ Empty\ for\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest. \ \ \ \\\ "" [snip] -location_constraint>\ +location_constraint> Canned\ ACL\ used\ when\ creating\ buckets\ and/or\ storing\ objects\ in\ S3. For\ more\ info\ visit\ https://docs.aws.amazon.com/AmazonS3/latest/dev/acl\-overview.html#canned\-acl Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ 1\ /\ Owner\ gets\ FULL_CONTROL.\ No\ one\ else\ has\ access\ rights\ (default). 
\ \ \ \\\ "private" [snip] -acl>\ +acl> The\ server\-side\ encryption\ algorithm\ used\ when\ storing\ this\ object\ in\ S3. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ 1\ /\ None \ \ \ \\\ "" \ 2\ /\ AES256 \ \ \ \\\ "AES256" -server_side_encryption>\ +server_side_encryption> The\ storage\ class\ to\ use\ when\ storing\ objects\ in\ S3. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ 1\ /\ Default @@ -7347,7 +7877,7 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ \ \ \\\ "REDUCED_REDUNDANCY" \ 4\ /\ Standard\ Infrequent\ Access\ storage\ class \ \ \ \\\ "STANDARD_IA" -storage_class>\ +storage_class> Remote\ config \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- [wasabi] @@ -7356,10 +7886,10 @@ access_key_id\ =\ YOURACCESSKEY secret_access_key\ =\ YOURSECRETACCESSKEY region\ =\ us\-east\-1 endpoint\ =\ s3.wasabisys.com -location_constraint\ =\ -acl\ =\ -server_side_encryption\ =\ -storage_class\ =\ +location_constraint\ = +acl\ = +server_side_encryption\ = +storage_class\ = \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- y)\ Yes\ this\ is\ OK e)\ Edit\ this\ remote @@ -7373,15 +7903,17 @@ This will leave the config file looking like this. .nf \f[C] [wasabi] +type\ =\ s3 +provider\ =\ Wasabi env_auth\ =\ false access_key_id\ =\ YOURACCESSKEY secret_access_key\ =\ YOURSECRETACCESSKEY -region\ =\ us\-east\-1 +region\ = endpoint\ =\ s3.wasabisys.com -location_constraint\ =\ -acl\ =\ -server_side_encryption\ =\ -storage_class\ =\ +location_constraint\ = +acl\ = +server_side_encryption\ = +storage_class\ = \f[] .fi .SS Backblaze B2 @@ -8309,6 +8841,9 @@ Flag to clear all the cached data for this remote before. .PP The size of a chunk (partial file data). Use lower numbers for slower connections. +If the chunk size is changed, any downloaded chunks will be invalid and +cache\-chunk\-path will need to be cleared or unexpected EOF errors will +occur. .PP \f[B]Default\f[]: 5M .SS \-\-cache\-total\-chunk\-size=SIZE @@ -9046,7 +9581,7 @@ can\[aq]t store. There is a full list of them in the "Ignored Files" section of this document (https://www.dropbox.com/en/help/145). Rclone will issue an error message -\f[C]File\ name\ disallowed\ \-\ not\ uploading\f[] if it attempt to +\f[C]File\ name\ disallowed\ \-\ not\ uploading\f[] if it attempts to upload one of those file names, but the sync won\[aq]t fail. .PP If you have more than 10,000 files in a directory then @@ -9436,6 +9971,10 @@ These credentials are what rclone will use for authentication. To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the \f[C]service_account_file\f[] prompt and rclone won\[aq]t use the browser based authentication flow. +If you\[aq]d rather stuff the contents of the credentials file into the +rclone config file, you can set \f[C]service_account_credentials\f[] +with the actual contents of the file instead, or set the equivalent +environment variable. .SS \-\-fast\-list .PP This remote supports \f[C]\-\-fast\-list\f[] which allows you to use @@ -9659,6 +10198,10 @@ To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the \f[C]service_account_file\f[] prompt during \f[C]rclone\ config\f[] and rclone won\[aq]t use the browser based authentication flow. 
+If you\[aq]d rather stuff the contents of the credentials file into the +rclone config file, you can set \f[C]service_account_credentials\f[] +with the actual contents of the file instead, or set the equivalent +environment variable. .SS Use case \- Google Apps/G\-suite account and individual Drive .PP Let\[aq]s say that you are the administrator of a Google Apps (old) or @@ -9812,6 +10355,13 @@ If you wish to empty your trash you can use the \f[C]rclone\ cleanup\ remote:\f[] command which will permanently delete all your trashed files. This command does not take any path arguments. +.SS Quota information +.PP +To view your current quota you can use the +\f[C]rclone\ about\ remote:\f[] command which will display your usage +limit (quota), the usage in Google Drive, the size of all files in the +Trash and the space used by other Google services such as Gmail. +This command does not take any path arguments. .SS Specific options .PP Here are the command line options specific to this cloud storage system. @@ -10455,6 +11005,124 @@ credentials and ignores the expires field returned by the Hubic API. The Swift API doesn\[aq]t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won\[aq]t check or use the MD5SUM for these. +.SS Mega +.PP +Mega (https://mega.nz/) is a cloud storage and file hosting service +known for its security feature where all files are encrypted locally +before they are uploaded. +This prevents anyone (including employees of Mega) from accessing the +files without knowledge of the key used for encryption. +.PP +This is an rclone backend for Mega which supports the file transfer +features of Mega using the same client side encryption. +.PP +Paths are specified as \f[C]remote:path\f[] +.PP +Paths may be as deep as required, eg +\f[C]remote:directory/subdirectory\f[]. +.PP +Here is an example of how to make a remote called \f[C]remote\f[]. +First run: +.IP +.nf +\f[C] +\ rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +No\ remotes\ found\ \-\ make\ a\ new\ one +n)\ New\ remote +s)\ Set\ configuration\ password +q)\ Quit\ config +n/s/q>\ n +name>\ remote +Type\ of\ storage\ to\ configure. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Alias\ for\ a\ existing\ remote +\ \ \ \\\ "alias" +[snip] +14\ /\ Mega +\ \ \ \\\ "mega" +[snip] +23\ /\ http\ Connection +\ \ \ \\\ "http" +Storage>\ mega +User\ name +user>\ you\@example.com +Password. +y)\ Yes\ type\ in\ my\ own\ password +g)\ Generate\ random\ password +n)\ No\ leave\ this\ optional\ password\ blank +y/g/n>\ y +Enter\ the\ password: +password: +Confirm\ the\ password: +password: +Remote\ config +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +type\ =\ mega +user\ =\ you\@example.com +pass\ =\ ***\ ENCRYPTED\ *** +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +Once configured you can then use \f[C]rclone\f[] like this, +.PP +List directories in top level of your Mega +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +List all the files in your Mega +.IP +.nf +\f[C] +rclone\ ls\ remote: +\f[] +.fi +.PP +To copy a local directory to an Mega directory called backup +.IP +.nf +\f[C] +rclone\ copy\ /home/source\ remote:backup +\f[] +.fi +.SS Modified time and hashes +.PP +Mega does not support modification times or hashes yet. 
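+.PP
+Because neither modification times nor hashes are available, rclone is
+left comparing file sizes when deciding what to transfer.
+If you want to make that behaviour explicit, one possible invocation
+(the paths here are only illustrative) is to pass the
+\f[C]\-\-size\-only\f[] flag:
+.IP
+.nf
+\f[C]
+rclone\ sync\ \-\-size\-only\ /home/source\ remote:backup
+\f[]
+.fi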
+.SS Duplicated files +.PP +Mega can have two files with exactly the same name and path (unlike a +normal file system). +.PP +Duplicated files cause problems with the syncing and you will see +messages in the log about duplicates. +.PP +Use \f[C]rclone\ dedupe\f[] to fix duplicated files. +.SS Limitations +.PP +This backend uses the go\-mega go +library (https://github.com/t3rm1n4l/go-mega) which is an opensource go +library implementing the Mega API. +There doesn\[aq]t appear to be any documentation for the mega protocol +beyond the mega C++ SDK (https://github.com/meganz/sdk) source code so +there are likely quite a few errors still remaining in this library. +.PP +Mega allows duplicate files which may confuse rclone. .SS Microsoft Azure Blob Storage .PP Paths are specified as \f[C]remote:container\f[] (or \f[C]remote:\f[] @@ -10782,8 +11450,11 @@ OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. .PP -One drive supports SHA1 type hashes, so you can use -\f[C]\-\-checksum\f[] flag. +OneDrive personal supports SHA1 type hashes. +OneDrive for business and Sharepoint Server support +QuickXorHash (https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash). +.PP +For all types of OneDrive you can use the \f[C]\-\-checksum\f[] flag. .SS Deleting files .PP Any files you delete with rclone will end up in the trash. @@ -11313,6 +11984,21 @@ rclone\ lsd\ myremote: This remote supports \f[C]\-\-fast\-list\f[] which allows you to use fewer transactions in exchange for more memory. See the rclone docs (/docs/#fast-list) for more details. +.SS \-\-update and \-\-use\-server\-modtime +.PP +As noted below, the modified time is stored on metadata on the object. +It is used by default for all operations that require checking the time +a file was last updated. +It allows rclone to treat the remote more like a true filesystem, but it +is inefficient because it requires an extra API call to retrieve the +metadata. +.PP +For many operations, the time the object was last uploaded to the remote +is sufficient to determine if it is "dirty". +By using \f[C]\-\-update\f[] along with +\f[C]\-\-use\-server\-modtime\f[], you can avoid the extra API call and +simply upload files whose local modtime is newer than the time it was +last uploaded. .SS Specific options .PP Here are the command line options specific to this cloud storage system. @@ -11504,13 +12190,17 @@ Your subscription level will determine how long items stay in the trash. SFTP is the Secure (or SSH) File Transfer Protocol (https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol). .PP -It runs over SSH v2 and is standard with most modern SSH installations. +SFTP runs over SSH v2 and is installed as standard with most modern SSH +installations. .PP Paths are specified as \f[C]remote:path\f[]. If the path does not begin with a \f[C]/\f[] it is relative to the home directory of the user. An empty path \f[C]remote:\f[] refers to the user\[aq]s home directory. .PP +Note that some SFTP servers will need the leading \f[C]/\f[] \- Synology +is a good example of this. +.PP Here is an example of making an SFTP configuration. First run .IP @@ -11693,11 +12383,17 @@ disable this behaviour. SFTP supports checksums if the same login has shell access and \f[C]md5sum\f[] or \f[C]sha1sum\f[] as well as \f[C]echo\f[] are in the remote\[aq]s PATH. -This remote check can be disabled by setting the configuration option -\f[C]disable_hashcheck\f[]. 
-This may be required if you\[aq]re connecting to SFTP servers which are
-not under your control, and to which the execution of remote commands is
-prohibited.
+This remote checksumming (file hashing) is recommended and enabled by
+default.
+Disabling the checksumming may be required if you are connecting to SFTP
+servers which are not under your control, and to which the execution of
+remote commands is prohibited.
+Set the configuration option \f[C]disable_hashcheck\f[] to \f[C]true\f[]
+to disable checksumming.
+.PP
+Note that on some SFTP servers (eg Synology) the paths for SSH and
+SFTP are different, so the hashes can\[aq]t be calculated properly.
+For such servers, setting \f[C]disable_hashcheck\f[] is a good idea.
.PP
The only ssh agent supported under Windows is Putty\[aq]s pageant.
.PP
@@ -11803,7 +12499,9 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "nextcloud"
\ 2\ /\ Owncloud
\ \ \ \\\ "owncloud"
-\ 3\ /\ Other\ site/service\ or\ software
+\ 3\ /\ Sharepoint
+\ \ \ \\\ "sharepoint"
+\ 4\ /\ Other\ site/service\ or\ software
\ \ \ \\\ "other"
vendor>\ 1
User\ name
@@ -11908,6 +12606,51 @@ write to the mount.
.PP
For more help see the put.io webdav
docs (http://help.put.io/apps-and-integrations/ftp-and-webdav).
+.SS Sharepoint
+.PP
+This backend can be used with Sharepoint as provided by OneDrive for
+Business or Office365 Education accounts.
+This feature is only needed for a few of these accounts, mostly
+Office365 Education ones.
+These accounts are sometimes not verified by the domain owner (see
+github#1975 (https://github.com/ncw/rclone/issues/1975)).
+.PP
+This means that these accounts can\[aq]t be added using the official API
+(other accounts should work with the "onedrive" option).
+However, it is possible to access them using webdav.
+.PP
+To use a Sharepoint remote with rclone, first find your remote\[aq]s
+URL:
+.IP \[bu] 2
+Go here (https://onedrive.live.com/about/en-us/signin/) to open your
+OneDrive or to sign in.
+.IP \[bu] 2
+Now take a look at your address bar; the URL should look like this:
+\f[C]https://[YOUR\-DOMAIN]\-my.sharepoint.com/personal/[YOUR\-EMAIL]/_layouts/15/onedrive.aspx\f[]
+.PP
+You\[aq]ll only need this URL up to the email address.
+After that, you\[aq]ll most likely want to add "/Documents".
+That subdirectory contains the actual data stored on your OneDrive.
+.PP
+Add the remote to rclone like this: configure the \f[C]url\f[] as
+\f[C]https://[YOUR\-DOMAIN]\-my.sharepoint.com/personal/[YOUR\-EMAIL]/Documents\f[]
+and use your normal account email and password for \f[C]user\f[] and
+\f[C]pass\f[].
+If you have 2FA enabled, you have to generate an app password.
+Set the \f[C]vendor\f[] to \f[C]sharepoint\f[].
+.PP
+Your config file should look like this:
+.IP
+.nf
+\f[C]
+[sharepoint]
+type\ =\ webdav
+url\ =\ https://[YOUR\-DOMAIN]\-my.sharepoint.com/personal/[YOUR\-EMAIL]/Documents
+vendor\ =\ sharepoint
+user\ =\ YourEmailAddress
+pass\ =\ encryptedpassword
+\f[]
+.fi
.SS Yandex Disk
.PP
Yandex Disk (https://disk.yandex.com) is a cloud storage solution
@@ -12187,6 +12930,18 @@ $\ rclone\ \-L\ ls\ /tmp/a
\ \ \ \ \ \ \ \ 6\ b/one
\f[]
.fi
+.SS \-\-local\-no\-check\-updated
+.PP
+Don\[aq]t check to see if the files change during upload.
+.PP
+Normally rclone checks the size and modification time of files as they
+are being uploaded and aborts with a message which starts
+\f[C]can\[aq]t\ copy\ \-\ source\ file\ is\ being\ updated\f[] if the
+file changes during upload.
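+.PP
+As a sketch of how this flag might be used (the source path below is
+illustrative, and the caveat in the next paragraph explains when this
+is needed):
+.IP
+.nf
+\f[C]
+rclone\ copy\ \-\-local\-no\-check\-updated\ /mnt/shared/data\ remote:backup
+\f[]
+.fi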
+.PP +However on some file systems this modification time check may fail (eg +Glusterfs #2206 (https://github.com/ncw/rclone/issues/2206)) so this +check can be disabled with this flag. .SS \-\-local\-no\-unicode\-normalization .PP This flag is deprecated now. @@ -12246,6 +13001,201 @@ This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped. .SS Changelog .IP \[bu] 2 +v1.41 \- 2018\-04\-28 +.RS 2 +.IP \[bu] 2 +New backends +.IP \[bu] 2 +Mega support added +.IP \[bu] 2 +Webdav now supports SharePoint cookie authentication (hensur) +.IP \[bu] 2 +New commands +.IP \[bu] 2 +link: create public link to files and folders (Stefan Breunig) +.IP \[bu] 2 +about: gets quota info from a remote (a\-roussos, ncw) +.IP \[bu] 2 +hashsum: a generic tool for any hash to produce md5sum like output +.IP \[bu] 2 +New Features +.IP \[bu] 2 +lsd: Add \-R flag and fix and update docs for all ls commands +.IP \[bu] 2 +ncdu: added a "refresh" key \- CTRL\-L (Keith Goldfarb) +.IP \[bu] 2 +serve restic: Add append\-only mode (Steve Kriss) +.IP \[bu] 2 +serve restic: Disallow overwriting files in append\-only mode (Alexander +Neumann) +.IP \[bu] 2 +serve restic: Print actual listener address (Matt Holt) +.IP \[bu] 2 +size: Add \-\-json flag (Matthew Holt) +.IP \[bu] 2 +sync: implement \-\-ignore\-errors (Mateusz Pabian) +.IP \[bu] 2 +dedupe: Add dedupe largest functionality (Richard Yang) +.IP \[bu] 2 +fs: Extend SizeSuffix to include TB and PB for rclone about +.IP \[bu] 2 +fs: add \-\-dump goroutines and \-\-dump openfiles for debugging +.IP \[bu] 2 +rc: implement core/memstats to print internal memory usage info +.IP \[bu] 2 +rc: new call rc/pid (Michael P. +Dubner) +.IP \[bu] 2 +Compile +.IP \[bu] 2 +Drop support for go1.6 +.IP \[bu] 2 +Release +.IP \[bu] 2 +Fix \f[C]make\ tarball\f[] (Chih\-Hsuan Yen) +.IP \[bu] 2 +Bug Fixes +.IP \[bu] 2 +filter: fix \-\-min\-age and \-\-max\-age together check +.IP \[bu] 2 +fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport +.IP \[bu] 2 +lsd,lsf: make sure all times we output are in local time +.IP \[bu] 2 +rc: fix setting bwlimit to unlimited +.IP \[bu] 2 +rc: take note of the \-\-rc\-addr flag too as per the docs +.IP \[bu] 2 +Mount +.IP \[bu] 2 +Use About to return the correct disk total/used/free (eg in \f[C]df\f[]) +.IP \[bu] 2 +Set \f[C]\-\-attr\-timeout\ default\f[] to \f[C]1s\f[] \- fixes: +.RS 2 +.IP \[bu] 2 +rclone using too much memory +.IP \[bu] 2 +rclone not serving files to samba +.IP \[bu] 2 +excessive time listing directories +.RE +.IP \[bu] 2 +Fix \f[C]df\ \-i\f[] (upstream fix) +.IP \[bu] 2 +VFS +.IP \[bu] 2 +Filter files \f[C]\&.\f[] and \f[C]\&..\f[] from directory listing +.IP \[bu] 2 +Only make the VFS cache if \-\-vfs\-cache\-mode > Off +.IP \[bu] 2 +Local +.IP \[bu] 2 +Add \-\-local\-no\-check\-updated to disable updated file checks +.IP \[bu] 2 +Retry remove on Windows sharing violation error +.IP \[bu] 2 +Cache +.IP \[bu] 2 +Flush the memory cache after close +.IP \[bu] 2 +Purge file data on notification +.IP \[bu] 2 +Always forget parent dir for notifications +.IP \[bu] 2 +Integrate with Plex websocket +.IP \[bu] 2 +Add rc cache/stats (seuffert) +.IP \[bu] 2 +Add info log on notification +.IP \[bu] 2 +Box +.IP \[bu] 2 +Fix failure reading large directories \- parse file/directory size as +float +.IP \[bu] 2 +Dropbox +.IP \[bu] 2 +Fix crypt+obfuscate on dropbox +.IP \[bu] 2 +Fix repeatedly uploading the same files +.IP \[bu] 2 +FTP +.IP \[bu] 2 +Work around strange 
response from box FTP server +.IP \[bu] 2 +More workarounds for FTP servers to fix mkParentDir error +.IP \[bu] 2 +Fix no error on listing non\-existent directory +.IP \[bu] 2 +Google Cloud Storage +.IP \[bu] 2 +Add service_account_credentials (Matt Holt) +.IP \[bu] 2 +Detect bucket presence by listing it \- minimises permissions needed +.IP \[bu] 2 +Ignore zero length directory markers +.IP \[bu] 2 +Google Drive +.IP \[bu] 2 +Add service_account_credentials (Matt Holt) +.IP \[bu] 2 +Fix directory move leaving a hardlinked directory behind +.IP \[bu] 2 +Return proper google errors when Opening files +.IP \[bu] 2 +When initialized with a filepath, optional features used incorrect root +path (Stefan Breunig) +.IP \[bu] 2 +HTTP +.IP \[bu] 2 +Fix sync for servers which don\[aq]t return Content\-Length in HEAD +.IP \[bu] 2 +Onedrive +.IP \[bu] 2 +Add QuickXorHash support for OneDrive for business +.IP \[bu] 2 +Fix socket leak in multipart session upload +.IP \[bu] 2 +S3 +.IP \[bu] 2 +Look in S3 named profile files for credentials +.IP \[bu] 2 +Add \f[C]\-\-s3\-disable\-checksum\f[] to disable checksum uploading +(Chris Redekop) +.IP \[bu] 2 +Hierarchical configuration support (Giri Badanahatti) +.IP \[bu] 2 +Add in config for all the supported S3 providers +.IP \[bu] 2 +Add One Zone Infrequent Access storage class (Craig Rachel) +.IP \[bu] 2 +Add \-\-use\-server\-modtime support (Peter Baumgartner) +.IP \[bu] 2 +Add \-\-s3\-chunk\-size option to control multipart uploads +.IP \[bu] 2 +Ignore zero length directory markers +.IP \[bu] 2 +SFTP +.IP \[bu] 2 +Update docs to match code, fix typos and clarify disable_hashcheck +prompt (Michael G. +Noll) +.IP \[bu] 2 +Update docs with Synology quirks +.IP \[bu] 2 +Fail soft with a debug on hash failure +.IP \[bu] 2 +Swift +.IP \[bu] 2 +Add \-\-use\-server\-modtime support (Peter Baumgartner) +.IP \[bu] 2 +Webdav +.IP \[bu] 2 +Support SharePoint cookie authentication (hensur) +.IP \[bu] 2 +Strip leading and trailing / off root +.RE +.IP \[bu] 2 v1.40 \- 2018\-03\-19 .RS 2 .IP \[bu] 2 @@ -14993,6 +15943,7 @@ Zhiming Wang Andy Pilate .IP \[bu] 2 Oliver Heyme + .IP \[bu] 2 wuyu .IP \[bu] 2 @@ -15061,6 +16012,7 @@ lewapm <32110057+lewapm@users.noreply.github.com> Yassine Imounachen .IP \[bu] 2 Chris Redekop + .IP \[bu] 2 Jon Fautley .IP \[bu] 2 @@ -15096,6 +16048,44 @@ wolfv Dave Pedu .IP \[bu] 2 Stefan Lindblom +.IP \[bu] 2 +seuffert +.IP \[bu] 2 +gbadanahatti <37121690+gbadanahatti@users.noreply.github.com> +.IP \[bu] 2 +Keith Goldfarb +.IP \[bu] 2 +Steve Kriss +.IP \[bu] 2 +Chih\-Hsuan Yen +.IP \[bu] 2 +Alexander Neumann +.IP \[bu] 2 +Matt Holt +.IP \[bu] 2 +Eri Bastos +.IP \[bu] 2 +Michael P. +Dubner +.IP \[bu] 2 +Antoine GIRARD +.IP \[bu] 2 +Mateusz Piotrowski +.IP \[bu] 2 +Animosity022 +.IP \[bu] 2 +Peter Baumgartner +.IP \[bu] 2 +Craig Rachel +.IP \[bu] 2 +Michael G. +Noll +.IP \[bu] 2 +hensur +.IP \[bu] 2 +Oliver Heyme +.IP \[bu] 2 +Richard Yang .SH Contact the rclone project .SS Forum .PP