From bbbc202ee6f9820bce4043ebe2b5f042c58b157b Mon Sep 17 00:00:00 2001
From: Nick Craig-Wood
Date: Thu, 15 Jun 2017 20:12:26 +0100
Subject: [PATCH] Add ftp.md to docs builder and update docs

---
 MANUAL.html                                 | 1154 +++++++++--------
 MANUAL.md                                   |  819 +++++++++---
 MANUAL.txt                                  |  739 +++++++++--
 bin/make_manual.py                          |    1 +
 docs/content/commands/rclone.md             |   20 +-
 docs/content/commands/rclone_authorize.md   |   11 +-
 docs/content/commands/rclone_cat.md         |   13 +-
 docs/content/commands/rclone_check.md       |   13 +-
 docs/content/commands/rclone_cleanup.md     |   11 +-
 docs/content/commands/rclone_config.md      |   11 +-
 docs/content/commands/rclone_copy.md        |   11 +-
 docs/content/commands/rclone_copyto.md      |   11 +-
 docs/content/commands/rclone_cryptcheck.md  |   11 +-
 docs/content/commands/rclone_dbhashsum.md   |  116 ++
 docs/content/commands/rclone_dedupe.md      |   13 +-
 docs/content/commands/rclone_delete.md      |   11 +-
 .../commands/rclone_genautocomplete.md      |   11 +-
 docs/content/commands/rclone_gendocs.md     |   19 +-
 docs/content/commands/rclone_listremotes.md |   13 +-
 docs/content/commands/rclone_ls.md          |   11 +-
 docs/content/commands/rclone_lsd.md         |   11 +-
 docs/content/commands/rclone_lsjson.md      |  144 ++
 docs/content/commands/rclone_lsl.md         |   11 +-
 docs/content/commands/rclone_md5sum.md      |   11 +-
 docs/content/commands/rclone_mkdir.md       |   11 +-
 docs/content/commands/rclone_mount.md       |   57 +-
 docs/content/commands/rclone_move.md        |   11 +-
 docs/content/commands/rclone_moveto.md      |   11 +-
 docs/content/commands/rclone_ncdu.md        |  135 ++
 docs/content/commands/rclone_obscure.md     |   11 +-
 docs/content/commands/rclone_purge.md       |   11 +-
 docs/content/commands/rclone_rmdir.md       |   11 +-
 docs/content/commands/rclone_rmdirs.md      |   11 +-
 docs/content/commands/rclone_sha1sum.md     |   11 +-
 docs/content/commands/rclone_size.md        |   11 +-
 docs/content/commands/rclone_sync.md        |   11 +-
 docs/content/commands/rclone_version.md     |   11 +-
 rclone.1                                    |  930 ++++++++++---
 38 files changed, 3305 insertions(+), 1134 deletions(-)
 create mode 100644 docs/content/commands/rclone_dbhashsum.md
 create mode 100644 docs/content/commands/rclone_lsjson.md
 create mode 100644 docs/content/commands/rclone_ncdu.md

diff --git a/MANUAL.html b/MANUAL.html
index 231384eb1..f07d0007b 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -12,10 +12,10 @@

Rclone

-

Logo

+

Logo

Rclone is a command line program to sync files and directories to and from

Features

@@ -36,34 +37,34 @@
  • MD5/SHA1 hashes checked at all times for file integrity
  • Timestamps preserved on files
  • Partial syncs supported on a whole file basis
  • -
  • Copy mode to just copy new/changed files
  • -
  • Sync (one way) mode to make a directory identical
  • -
  • Check mode to check for file hash equality
  • +
  • Copy mode to just copy new/changed files
  • +
  • Sync (one way) mode to make a directory identical
  • +
  • Check mode to check for file hash equality
  • Can sync to and from network, eg two different cloud accounts
  • -
  • Optional encryption (Crypt)
  • -
  • Optional FUSE mount (rclone mount)
  • +
  • Optional encryption (Crypt)
  • +
  • Optional FUSE mount (rclone mount)
  • Links

    Install

    Rclone is a Go program and comes as a single binary file.

    Quickstart

    See below for some expanded Linux / macOS instructions.

    -

    See the Usage section of the docs for how to use rclone, or run rclone -h.

    +

    See the Usage section of the docs for how to use rclone, or run rclone -h.

    Linux installation from precompiled binary

    Fetch and unpack

    -
    curl -O http://downloads.rclone.org/rclone-current-linux-amd64.zip
    +
    curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
     unzip rclone-current-linux-amd64.zip
     cd rclone-*-linux-amd64

    Copy binary file

    @@ -74,21 +75,21 @@ sudo chmod 755 /usr/bin/rclone
    sudo mkdir -p /usr/local/share/man/man1
     sudo cp rclone.1 /usr/local/share/man/man1/
     sudo mandb 
    -

    Run rclone config to setup. See rclone config docs for more details.

    +

    Run rclone config to setup. See rclone config docs for more details.

    rclone config

    macOS installation from precompiled binary

    Download the latest version of rclone.

    -
    cd && curl -O http://downloads.rclone.org/rclone-current-osx-amd64.zip
    +
    cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip

    Unzip the download and cd to the extracted folder.

    unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64

    Move rclone to your $PATH. You will be prompted for your password.

    sudo mv rclone /usr/local/bin/

    Remove the leftover files.

    cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip
    -

    Run rclone config to setup. See rclone config docs for more details.

    +

    Run rclone config to setup. See rclone config docs for more details.

    rclone config

    Install from source

    -

    Make sure you have at least Go 1.5 installed. Make sure your GOPATH is set, then:

    +

    Make sure you have at least Go 1.6 installed. Make sure your GOPATH is set, then:

    go get -u -v github.com/ncw/rclone

    and this will build the binary in $GOPATH/bin. If you have built rclone before then you will want to update its dependencies first with this

    go get -u -v github.com/ncw/rclone/...
    @@ -107,7 +108,7 @@ sudo mandb

    See below for how to install snapd if it isn't already installed

    Arch

    @@ -129,7 +130,7 @@ sudo dnf install snapd

    OpenEmbedded/Yocto

    Install the snap meta layer.

    openSUSE

    -
    sudo zypper addrepo http://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/ snappy
    +
    sudo zypper addrepo https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/ snappy
     sudo zypper install snapd

    OpenWrt

    Enable the snap-openwrt feed.

    @@ -139,19 +140,20 @@ sudo zypper install snapd
    rclone config

    See the following for detailed instructions for

    Usage

    Rclone syncs a directory tree from one storage system to another.

    @@ -239,7 +241,7 @@ rclone --dry-run --min-size 100M delete remote:path

    Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination.

    If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.

    If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.

    -
    rclone check source:path dest:path
    +
    rclone check source:path dest:path [flags]

    Options

          --download   Check by downloading rather than with hash.

    rclone ls

    @@ -342,7 +344,7 @@ two-3.txt: renamed from: two.txt
    rclone dedupe --dedupe-mode rename "drive:Google Photos"

    Or

    rclone dedupe rename "drive:Google Photos"
    -
    rclone dedupe [mode] remote:path
    +
    rclone dedupe [mode] remote:path [flags]

    Options

          --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")

    rclone authorize

    @@ -361,7 +363,7 @@ two-3.txt: renamed from: two.txt

    Or like this to output any .txt files in dir or subdirectories.

    rclone --include "*.txt" cat remote:path/to/dir

    Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.

    -
    rclone cat remote:path
    +
    rclone cat remote:path [flags]

    Options

          --count int    Only print N characters. (default -1)
           --discard      Discard the output instead of printing.
    @@ -396,9 +398,14 @@ if src is directory
     
    rclone cryptcheck remote:path encryptedremote:path

    After it has run it will log the status of the encryptedremote:.

    rclone cryptcheck remote:path cryptedremote:path
    +

    rclone dbhashsum

    +

Produces a Dropbox hash file for all the objects in the path.

    +

    Synopsis

    +

    Produces a Dropbox hash file for all the objects in the path. The hashes are calculated according to Dropbox content hash rules. The output is in the same format as md5sum and sha1sum.

    +
    rclone dbhashsum remote:path

    rclone genautocomplete

    Output bash completion script for rclone.

    -

    Synopsis

    +

    Synopsis

    Generates a bash shell autocompletion script for rclone.

    This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg

    sudo rclone genautocomplete
    @@ -408,31 +415,48 @@ if src is directory
    rclone genautocomplete [output_file]

    rclone gendocs

    Output markdown docs for rclone to the directory supplied.

    -

    Synopsis

    +

    Synopsis

    This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.

    -
    rclone gendocs output_directory
    +
    rclone gendocs output_directory [flags]
    +

    Options

    +
      -h, --help   help for gendocs

    rclone listremotes

    List all the remotes in the config file.

    -

    Synopsis

    +

    Synopsis

    rclone listremotes lists all the available remotes from the config file.

    When uses with the -l flag it lists the types too.

    -
    rclone listremotes
    -

    Options

    +
    rclone listremotes [flags]
    +

    Options

      -l, --long   Show the type as well as names.
    +

    rclone lsjson

    +

    List directories and objects in the path in JSON format.

    +

    Synopsis

    +

    List directories and objects in the path in JSON format.

    +

    The output is an array of Items, where each Item looks like this

    +

{
  "Hashes" : {
    "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
    "MD5" : "b1946ac92492d2347c6235b4d2611184",
    "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
  },
  "IsDir" : false,
  "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
  "Name" : "file.txt",
  "Path" : "full/path/goes/here/file.txt",
  "Size" : 6
}

    +

If --hash is not specified then the Hashes property won't be emitted.

    +

    If --no-modtime is specified then ModTime will be blank.

    +

    The time is in RFC3339 format with nanosecond precision.

    +

    The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line.

    +
    rclone lsjson remote:path [flags]
    +

    Options

    +
          --hash         Include hashes in the output (may take longer).
    +      --no-modtime   Don't read the modification time (can speed things up).
    +  -R, --recursive    Recurse into the listing.
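The line-per-item layout makes the output easy to consume in a pipeline. As a rough illustration (not part of the rclone docs; a real consumer should use a proper JSON parser such as jq), the Name property can be pulled out of each line like this:

```shell
# Hypothetical one-object-per-line lsjson output, as in the example above.
line='{ "IsDir" : false, "Name" : "file.txt", "Path" : "full/path/goes/here/file.txt", "Size" : 6 }'

# Crude extraction of the Name property with sed; this only handles the
# simple flat layout shown here, not arbitrary JSON.
printf '%s\n' "$line" | sed -n 's/.*"Name" : "\([^"]*\)".*/\1/p'
# prints: file.txt
```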

    rclone mount

    Mount the remote as a mountpoint. EXPERIMENTAL

    -

    Synopsis

    +

    Synopsis

    rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's cloud storage systems as a file system with FUSE.

    This is EXPERIMENTAL - use with care.

    First set up your remote using rclone config. Check it works with rclone ls etc.

    -

    Start the mount like this (note the & on the end to put rclone in the background).

    -
    rclone mount remote:path/to/files /path/to/local/mount &
    -

    Stop the mount with

    -
    fusermount -u /path/to/local/mount
    -

    Or if that fails try

    -
    fusermount -z -u /path/to/local/mount
    -

    Or with OS X

    -
    umount /path/to/local/mount
    +

    Start the mount like this

    +
    rclone mount remote:path/to/files /path/to/local/mount
    +

    When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount is automatically stopped.

    +

    The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually with

    +
    # Linux
    +fusermount -u /path/to/local/mount
    +# OS X
    +umount /path/to/local/mount

    Limitations

This can only write files sequentially; it can only seek when reading. This means that many applications won't work with their files on an rclone mount.

The bucket based remotes (eg Swift, S3, Google Cloud Storage, B2, Hubic) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift: won't work whereas swift:bucket will as will swift:bucket/path. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.

    @@ -441,6 +465,10 @@ if src is directory

    File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. This might happen in the future, but for the moment rclone mount won't do that, so will be less reliable than the rclone command.

    Filters

    Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.

    +

    Directory Cache

    +

    Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.

    +

    Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    +
    kill -SIGHUP $(pidof rclone)

    Bugs

• All the remotes should work for read, but some may not for write
@@ -450,14 +478,8 @@ if src is directory
• Or put in an upload cache to cache the files on disk first
    -

    TODO

    -
      -
    • Check hashes on upload/download
    • -
    • Preserve timestamps
    • -
    • Move directories
    • -
    -
    rclone mount remote:path /path/to/mountpoint
    -

    Options

    +
    rclone mount remote:path /path/to/mountpoint [flags]
    +

    Options

          --allow-non-empty           Allow mounting over a non-empty directory.
           --allow-other               Allow access to other users.
           --allow-root                Allow access to root user.
    @@ -466,15 +488,17 @@ if src is directory
           --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
           --gid uint32                Override the gid field set by the filesystem. (default 502)
           --max-read-ahead int        The number of bytes that can be prefetched for sequential reads. (default 128k)
    -      --no-modtime                Don't read the modification time (can speed things up).
    +      --no-checksum               Don't compare checksums on up/download.
    +      --no-modtime                Don't read/write the modification time (can speed things up).
           --no-seek                   Don't allow seeking in files.
    +      --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
           --read-only                 Mount read-only.
           --uid uint32                Override the uid field set by the filesystem. (default 502)
           --umask int                 Override the permission bits set by the filesystem. (default 2)
           --write-back-cache          Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.

    rclone moveto

    Move file or directory from source to dest.

    -

    Synopsis

    +

    Synopsis

    If source:path is a file or directory then it moves it to a file or directory named dest:path.

This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.

    So

    @@ -489,14 +513,30 @@ if src is directory

    This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.

    Important: Since this can cause data loss, test first with the --dry-run flag.

    rclone moveto source:path dest:path
    +

    rclone ncdu

    +

    Explore a remote with a text based user interface.

    +

    Synopsis

    +

    This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".

    +

    To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.

    +

    Here are the keys - press '?' to toggle the help on and off

    +
     ↑,↓ or k,j to Move
    + →,l to enter
    + ←,h to return
    + c toggle counts
    + g toggle graph
    + n,s,C sort by name,size,count
    + ? to toggle help on and off
    + q/ESC/c-C to quit
    +

This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment, most importantly deleting files, but is useful as it stands.

    +
    rclone ncdu remote:path

    rclone obscure

    Obscure password for use in the rclone.conf

    -

    Synopsis

    +

    Synopsis

    Obscure password for use in the rclone.conf

    rclone obscure password

    rclone rmdirs

Remove any empty directories under the path.

    -

    Synopsis

    +

    Synopsis

This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in it.

    This is useful for tidying up remotes that rclone has left a lot of empty directories in.

    rclone rmdirs remote:path
    @@ -536,7 +576,7 @@ if src is directory

    This can be used when scripting to make aged backups efficiently, eg

    rclone sync remote:current-backup remote:previous-backup
     rclone sync /path/to/files remote:current-backup
    -

    Options

    +

    Options

    Rclone has a number of options to control its behaviour.

    Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

    Options which use SIZE use kByte by default. However a suffix of b for bytes, k for kBytes, M for MBytes and G for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
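As an illustrative calculation (not from the docs), the binary SIZE units and a duration such as "2h45m" work out as:

```shell
# Binary units: k = 2^10 bytes, M = 2^20 bytes (written with * to stay POSIX).
echo "1k = $((1024)) bytes"            # 1024
echo "1M = $((1024 * 1024)) bytes"     # 1048576

# A go-style duration like "2h45m" is simply 2*3600 + 45*60 seconds.
echo "2h45m = $((2 * 3600 + 45 * 60)) seconds"   # 9900
```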

    @@ -556,7 +596,7 @@ rclone sync /path/to/files remote:current-backup

    An example of a typical timetable to avoid link saturation during daytime working hours could be:

    --bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"

    In this example, the transfer bandwidth will be set to 512kBytes/sec at 8am. At noon, it will raise to 10Mbytes/s, and drop back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.

    -

    Bandwidth limits only apply to the data transfer. The don't apply to the bandwith of the directory listings etc.

    +

    Bandwidth limits only apply to the data transfer. They don't apply to the bandwidth of the directory listings etc.

    Note that the units are Bytes/s not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M parameter for rclone.
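The divide-by-8 conversion above can be checked quickly (an illustrative calculation, not an rclone command):

```shell
# Half of a 10 Mbit/s link, converted to the MByte/s value --bwlimit expects.
awk 'BEGIN { printf "%.3f MByte/s\n", (10 / 2) / 8 }'   # prints: 0.625 MByte/s
```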

    --buffer-size=SIZE

    Use this sized buffer to speed up file transfers. Each --transfer will use this much memory for buffering.

    @@ -567,7 +607,7 @@ rclone sync /path/to/files remote:current-backup

    -c, --checksum

    Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.

    This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.

    -

    This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section.

    +

    This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section.

    Eg rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.

    When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.

    --config=CONFIG_FILE

    @@ -663,7 +703,20 @@ rclone sync /path/to/files remote:current-backup

    This option allows you to specify when files on your destination are deleted when you sync folders.

    Specifying the value --delete-before will delete all files present on the destination, but not on the source before starting the transfer of any new or updated files. This uses two passes through the file systems, one for the deletions and one for the copies.

    Specifying --delete-during will delete files while checking and uploading files. This is the fastest option and uses the least memory.

    -

    Specifying --delete-after (the default value) will delay deletion of files until all new/updated files have been successfully transfered. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed sucessfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors.

    +

    Specifying --delete-after (the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors.

    +

    --fast-list

    +

    When doing anything which involves a directory listing (eg sync, copy, ls - in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very quickly using the least amount of memory.

    +

    However some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket based remotes (eg s3, b2, gcs, swift, hubic).

    +

    If you use the --fast-list flag then rclone will use this method for listing directories. This will have the following consequences for the listing:

+
  • It will use fewer transactions (important if you pay for them)
  • It will use more memory. Rclone has to load the whole listing into memory
  • It may be faster because it uses fewer transactions
  • It may be slower because it can't be parallelised

    rclone should always give identical results with and without --fast-list.

    +

    If you pay for transactions and can fit your entire sync listing into memory then --fast-list is recommended. If you have a very big sync to do then don't use --fast-list otherwise you will run out of memory.

    +

    If you use --fast-list on a remote which doesn't support it, then rclone will just ignore it.

    --timeout=TIME

    This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.

    The default is 5m. Set to 0 to disable.

    @@ -673,7 +726,7 @@ rclone sync /path/to/files remote:current-backup

    -u, --update

    This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.

    If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different.

    -

    On remotes which don't support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remoes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.

    +

    On remotes which don't support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.

    This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a --size-only check and faster than using --checksum.

    -v, -vv, --verbose

    With -v rclone will tell you about each file that is transferred and a small number of significant events.

@@ -720,7 +773,7 @@ c/u/q>
read -s RCLONE_CONFIG_PASS
export RCLONE_CONFIG_PASS
-

    Then source the file when you want to use it. From the shell you would do source set-rclone-password. It will then ask you for the password and set it in the envonment variable.

    +

    Then source the file when you want to use it. From the shell you would do source set-rclone-password. It will then ask you for the password and set it in the environment variable.
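A minimal set-rclone-password along the lines the docs describe could look like this (a sketch; the filename and prompt text are illustrative):

```shell
#!/bin/bash
# set-rclone-password: prompt for the config password without echoing it,
# then export it so rclone picks it up from RCLONE_CONFIG_PASS.
echo "Enter the rclone config password:"
read -s RCLONE_CONFIG_PASS
export RCLONE_CONFIG_PASS
```

Sourcing it rather than executing it keeps RCLONE_CONFIG_PASS in the current shell, which is why the docs say to use source set-rclone-password.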

    If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS doesn't contain a valid password.

    Developer options

    These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with remote name eg --drive-test-option - see the docs for the remote in question.

    @@ -744,7 +797,7 @@ export RCLONE_CONFIG_PASS

    --no-traverse

    The --no-traverse flag controls whether the destination file system is traversed when using the copy or move commands. --no-traverse is not compatible with sync and will be ignored if you supply it with sync.

    If you are only copying a small number of files and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.

    -

    However if you are copying a large number of files, escpecially if you are doing a copy where lots of the files haven't changed and won't need copying then you shouldn't use --no-traverse.

    +

    However if you are copying a large number of files, especially if you are doing a copy where lots of the files haven't changed and won't need copying then you shouldn't use --no-traverse.

    It can also be used to reduce the memory usage of rclone when copying - rclone --no-traverse copy src dst won't load either the source or destination listings into memory so will use the minimum amount of memory.

    Filtering

    For the filtering options

    @@ -763,7 +816,7 @@ export RCLONE_CONFIG_PASS
  • --max-age
  • --dump-filters
  • -

    See the filtering section.

    +

    See the filtering section.

    Logging

    rclone has 4 levels of logging, Error, Notice, Info and Debug.

    By default rclone logs to standard error. This means you can redirect standard error and still see the normal output of rclone commands (eg rclone ls).

    @@ -778,10 +831,10 @@ export RCLONE_CONFIG_PASS

    Exit Code

    If any errors occurred during the command, rclone will exit with a non-zero exit code. This allows scripts to detect when rclone operations have failed.

    During the startup phase rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.

    -

    When rclone is running it will accumulate errors as it goes along, and only exit with an non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visibile with -q) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.

    +

    When rclone is running it will accumulate errors as it goes along, and only exit with an non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.
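A script can act on that exit code in the usual shell way. In this sketch rclone is stubbed with a function that always fails so the pattern is self-contained; with the real binary the if would test the actual transfer result:

```shell
# Stand-in for the real rclone binary, hard-wired to fail for illustration.
rclone() { return 1; }

if rclone sync /data remote:backup; then
    echo "sync ok"
else
    echo "sync failed with exit code $?"   # prints: sync failed with exit code 1
fi
```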

    Environment Variables

    Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

    -

    Options

    +

    Options

    Every option in rclone can have its default set by environment variable.

    To find the name of the environment variable, first take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.

    For example to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.
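The renaming rule can be expressed mechanically. This sketch (illustrative, not part of rclone) derives the variable name for a long option:

```shell
# --long-option -> RCLONE_LONG_OPTION: strip "--", map - to _, upper-case.
opt="--stats"
printf 'RCLONE_%s\n' "$(printf '%s' "${opt#--}" | tr 'a-z-' 'A-Z_')"
# prints: RCLONE_STATS
```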

@@ -1184,6 +1245,7 @@ user2/stuff

    The cloud storage system supports various hash types of the objects.
    The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command.

    To use the checksum checks between filesystems they must support a common hash type.

    +

    † Note that Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.

    ModTime

The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum flag.

    All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.

    @@ -1210,110 +1272,132 @@ The hashes are used when transferring data as an integrity check and can be spec
Name                   Hash      ModTime  Case Insensitive  Duplicate Files  MIME Type
Google Drive           MD5       Yes      No                Yes              R/W
Amazon S3              MD5       Yes      No                No               R/W
Openstack Swift        MD5       Yes      No                No               R/W
Dropbox                DBHASH †  Yes      Yes               No               -
Google Cloud Storage   MD5       Yes      No                No               R/W
Amazon Drive           MD5       No       Yes               No               R
Microsoft OneDrive     SHA1      Yes      Yes               No               R
Hubic                  MD5       Yes      No                No               R/W
Backblaze B2           SHA1      Yes      No                No               R/W
Yandex Disk            MD5       Yes      No                No               R/W
SFTP                   -         Yes      Depends           No               -
FTP                    -         No       Yes               No               -
The local filesystem   All       Yes      Depends           No               -
Name                   Purge  Copy  Move  DirMove  CleanUp  ListR
Google Drive           Yes    Yes   Yes   Yes      No #575  No
Amazon S3              No     Yes   No    No       No       Yes
Openstack Swift        Yes †  Yes   No    No       No       Yes
Dropbox                Yes    Yes   Yes   Yes      No #575  No
Google Cloud Storage   Yes    Yes   No    No       No       Yes
Amazon Drive           Yes    No    Yes   Yes      No #575  No
Microsoft OneDrive     Yes    Yes   Yes   No #197  No #575  No
Hubic                  Yes †  Yes   No    No       No       Yes
Backblaze B2           No     No    No    No       Yes      Yes
Yandex Disk            Yes    No    No    No       No #575  Yes
SFTP                   No     No    Yes   Yes      No       No
FTP                    No     No    Yes   Yes      No       No
The local filesystem   Yes    No    Yes   Yes      No       No
    @@ -1331,6 +1415,8 @@ The hashes are used when transferring data as an integrity check and can be spec

    CleanUp

    This is used for emptying the trash for a remote by rclone cleanup.

    If the server can't do CleanUp then rclone cleanup will return an error.

    +

    ListR

    +

    The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list flag to work. See the rclone docs for more details.
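As a sketch of its use (remote:backup is a hypothetical remote path):

```shell
# Fewer API transactions at the cost of memory, on remotes that support ListR
rclone sync --fast-list /home/source remote:backup
rclone size --fast-list remote:backup
```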

    Google Drive

    Paths are specified as drive:path

    Drive paths may be as deep as required, eg drive:directory/subdirectory.

    @@ -1338,10 +1424,13 @@ The hashes are used when transferring data as an integrity check and can be spec

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    -
    n) New remote
    -d) Delete remote
    +
    No remotes found - make a new one
    +n) New remote
    +r) Rename remote
    +c) Copy remote
    +s) Set configuration password
     q) Quit config
    -e/n/d/q> n
    +n/r/c/s/q> n
     name> remote
     Type of storage to configure.
     Choose a number from below, or type in your own value
    @@ -1355,27 +1444,29 @@ Choose a number from below, or type in your own value
        \ "dropbox"
      5 / Encrypt/Decrypt a remote
        \ "crypt"
    - 6 / Google Cloud Storage (this is not Google Drive)
    + 6 / FTP Connection
    +   \ "ftp"
    + 7 / Google Cloud Storage (this is not Google Drive)
        \ "google cloud storage"
    - 7 / Google Drive
    + 8 / Google Drive
        \ "drive"
    - 8 / Hubic
    + 9 / Hubic
        \ "hubic"
    - 9 / Local Disk
    +10 / Local Disk
        \ "local"
    -10 / Microsoft OneDrive
    +11 / Microsoft OneDrive
        \ "onedrive"
    -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
        \ "swift"
    -12 / SSH/SFTP Connection
    +13 / SSH/SFTP Connection
        \ "sftp"
    -13 / Yandex Disk
    +14 / Yandex Disk
        \ "yandex"
    -Storage> 7
    +Storage> 8
     Google Application Client Id - leave blank normally.
    -client_id>
    +client_id> 
     Google Application Client Secret - leave blank normally.
    -client_secret>
    +client_secret> 
     Remote config
     Use auto config?
      * Say Y if not sure
    @@ -1387,10 +1478,14 @@ If your browser doesn't open automatically go to the following link: http://
     Log in and authorize rclone for access
     Waiting for code...
     Got code
    +Configure this as a team drive?
    +y) Yes
    +n) No
    +y/n> n
     --------------------
     [remote]
    -client_id =
    -client_secret =
    +client_id = 
    +client_secret = 
     token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
     --------------------
     y) Yes this is OK
    @@ -1405,6 +1500,34 @@ y/e/d> y
    rclone ls remote:

    To copy a local directory to a drive directory called backup

    rclone copy /home/source remote:backup
    +

    Team drives

    +

    If you want to configure the remote to point to a Google Team Drive then answer y to the question Configure this as a team drive?.

    +

    This will fetch the list of Team Drives from google and allow you to configure which one you want to use. You can also type in a team drive ID if you prefer.

    +

    For example:

    +
    Configure this as a team drive?
    +y) Yes
    +n) No
    +y/n> y
    +Fetching team drive list...
    +Choose a number from below, or type in your own value
    + 1 / Rclone Test
    +   \ "xxxxxxxxxxxxxxxxxxxx"
    + 2 / Rclone Test 2
    +   \ "yyyyyyyyyyyyyyyyyyyy"
    + 3 / Rclone Test 3
    +   \ "zzzzzzzzzzzzzzzzzzzz"
    +Enter a Team Drive ID> 1
    +--------------------
    +[remote]
    +client_id = 
    +client_secret = 
    +token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
    +team_drive = xxxxxxxxxxxxxxxxxxxx
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y

    Modified time

    Google drive stores modification times accurate to 1 ms.

    Revisions

    @@ -1445,111 +1568,111 @@ y/e/d> y
Extension   Mime Type                                                                   Description
csv         text/csv                                                                    Standard CSV format for Spreadsheets
doc         application/msword                                                          Microsoft Office Document
docx        application/vnd.openxmlformats-officedocument.wordprocessingml.document     Microsoft Office Document
epub        application/epub+zip                                                        E-book format
html        text/html                                                                   An HTML Document
jpg         image/jpeg                                                                  A JPEG Image File
odp         application/vnd.oasis.opendocument.presentation                             Openoffice Presentation
ods         application/vnd.oasis.opendocument.spreadsheet                              Openoffice Spreadsheet
ods         application/x-vnd.oasis.opendocument.spreadsheet                            Openoffice Spreadsheet
odt         application/vnd.oasis.opendocument.text                                     Openoffice Document
pdf         application/pdf                                                             Adobe PDF Format
png         image/png                                                                   PNG Image Format
pptx        application/vnd.openxmlformats-officedocument.presentationml.presentation   Microsoft Office Powerpoint
rtf         application/rtf                                                             Rich Text Format
svg         image/svg+xml                                                               Scalable Vector Graphics Format
tsv         text/tab-separated-values                                                   Standard TSV format for spreadsheets
txt         text/plain                                                                  Plain Text
xls         application/vnd.ms-excel                                                    Microsoft Office Spreadsheet
xlsx        application/vnd.openxmlformats-officedocument.spreadsheetml.sheet           Microsoft Office Spreadsheet
zip         application/zip                                                             A ZIP file of HTML, Images CSS

@@ -1694,7 +1817,7 @@ Choose a number from below, or type in your own value
   \ "sa-east-1"
 location_constraint> 1
 Canned ACL used when creating buckets and/or storing objects in S3.
-For more info visit http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
 Choose a number from below, or type in your own value
  1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
@@ -1756,6 +1879,8 @@ y/e/d> y
    rclone ls remote:bucket

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    rclone sync /home/local/directory remote:bucket
    +

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    Modified time

    The modified time is stored as metadata on the object as X-Amz-Meta-Mtime as floating point since the epoch accurate to 1 ns.

    Multipart uploads

    @@ -1777,11 +1902,49 @@ y/e/d> y
  • Running rclone on an EC2 instance with an IAM role
• If none of these options actually end up providing rclone with AWS credentials then S3 interaction will be non-authenticated (see below).

    +

    S3 Permissions

    +

    When using the sync subcommand of rclone the following minimum permissions are required to be available on the bucket being written to:

• ListBucket
• DeleteObject
• GetObject
• PutObject
• PutObjectAcl

    Example policy:

    +
    {
    +    "Version": "2012-10-17",
    +    "Statement": [
    +        {
    +            "Effect": "Allow",
    +            "Principal": {
    +                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
    +            },
    +            "Action": [
    +                "s3:ListBucket",
    +                "s3:DeleteObject",
    +                "s3:GetObject",
    +                "s3:PutObject",
    +                "s3:PutObjectAcl"
    +            ],
    +            "Resource": [
    +              "arn:aws:s3:::BUCKET_NAME/*",
    +              "arn:aws:s3:::BUCKET_NAME"
    +            ]
    +        }
    +    ]
    +}
    +

    Notes on above:

    +
1. This is a policy that can be used when creating a bucket. It assumes that USER_NAME has been created.
2. The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects.

    For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync.
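As an illustrative sketch (the aws CLI call is an assumption, not part of the rclone docs), a policy like the one above, saved as policy.json with USER_SID, USER_NAME and BUCKET_NAME filled in, could be attached with:

```shell
aws s3api put-bucket-policy --bucket BUCKET_NAME --policy file://policy.json
```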

    Specific options

    Here are the command line options specific to this cloud storage system.

    --s3-acl=STRING

    Canned ACL used when creating buckets and/or storing objects in S3.

    For more info visit the canned ACL docs.

    --s3-storage-class=STRING

    Storage class to upload new objects with.

    Available options include:
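For example, to upload with a non-default storage class (REDUCED_REDUNDANCY is shown as an assumed-valid class name; check your provider's documentation):

```shell
rclone copy --s3-storage-class REDUCED_REDUNDANCY /home/source s3:bucket/dir
```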

    @@ -1882,10 +2045,10 @@ server_side_encryption =

    So once set up, for example to copy files into a bucket

    rclone --size-only copy /path/to/files minio:bucket

    Swift

    Swift refers to Openstack Object Storage. Commercial implementations of that being:

    Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.

    Here is an example of making a swift configuration. First run

@@ -2001,6 +2164,8 @@
 key = $OS_PASSWORD
 auth = $OS_AUTH_URL
 tenant = $OS_TENANT_NAME

    Note that you may (or may not) need to set region too - try without first.

    +

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    Specific options

    Here are the command line options specific to this cloud storage system.

    --swift-chunk-size=SIZE

    @@ -2083,15 +2248,10 @@ y/e/d> y
    rclone ls remote:

    To copy a local directory to a dropbox directory called backup

    rclone copy /home/source remote:backup
    -

    Modified time and MD5SUMs

    -

    Dropbox doesn't provide the ability to set modification times in the V1 public API, so rclone can't support modified time with Dropbox.

    -

    This may change in the future - see these issues for details:

    - -

    Dropbox doesn't return any sort of checksum (MD5 or SHA1).

    -

    Together that means that syncs to dropbox will effectively have the --size-only flag set.

    +

    Modified time and Hashes

    +

    Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.

    +

This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use the --size-only or --checksum flag to stop it.
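For example, either of the following (dropbox:backup is a hypothetical remote path) avoids the mass re-upload:

```shell
rclone sync --size-only /home/source dropbox:backup   # compare by size only
rclone sync --checksum /home/source dropbox:backup    # compare by Dropbox hash
```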

    +

    Dropbox supports its own hash type which is checked for all transfers.

    Specific options

    Here are the command line options specific to this cloud storage system.

    --dropbox-chunk-size=SIZE

    @@ -2099,7 +2259,7 @@ y/e/d> y

    Limitations

    Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail.

    -

    If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbix:dir followed by an rclone rmdir dropbox:dir.

    +

    If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.
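The work-around above, written out:

```shell
rclone delete dropbox:dir   # remove the files without using purge
rclone rmdir dropbox:dir    # then remove the now-empty directory
```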

    Google Cloud Storage

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

    The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config walks you through it.

    @@ -2215,19 +2375,27 @@ y/e/d> y

    You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

    To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.

    To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow.

    +

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    Modified time

Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns.

    Amazon Drive

    Paths are specified as remote:path

    Paths may be as deep as required, eg remote:directory/subdirectory.

    The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config walks you through it.

    +

    The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.

    +

NB rclone doesn't currently have its own Amazon Drive credentials (see the forum for why) so you will either need to have your own client_id and client_secret with Amazon Drive, or use a third party oauth proxy in which case you will need to enter client_id, client_secret, auth_url and token_url.

    +

Note also that if you are not using Amazon's auth_url and token_url (ie you filled in something for those) then when setting up on a remote machine you can only use the copy-the-config method of configuration - rclone authorize will not work.

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    -
    n) New remote
    -d) Delete remote
    +
    No remotes found - make a new one
    +n) New remote
    +r) Rename remote
    +c) Copy remote
    +s) Set configuration password
     q) Quit config
    -e/n/d/q> n
    +n/r/c/s/q> n
     name> remote
     Type of storage to configure.
     Choose a number from below, or type in your own value
    @@ -2241,28 +2409,35 @@ Choose a number from below, or type in your own value
        \ "dropbox"
      5 / Encrypt/Decrypt a remote
        \ "crypt"
    - 6 / Google Cloud Storage (this is not Google Drive)
    + 6 / FTP Connection
    +   \ "ftp"
    + 7 / Google Cloud Storage (this is not Google Drive)
        \ "google cloud storage"
    - 7 / Google Drive
    + 8 / Google Drive
        \ "drive"
    - 8 / Hubic
    + 9 / Hubic
        \ "hubic"
    - 9 / Local Disk
    +10 / Local Disk
        \ "local"
    -10 / Microsoft OneDrive
    +11 / Microsoft OneDrive
        \ "onedrive"
    -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
        \ "swift"
    -12 / SSH/SFTP Connection
    +13 / SSH/SFTP Connection
        \ "sftp"
    -13 / Yandex Disk
    +14 / Yandex Disk
        \ "yandex"
     Storage> 1
    -Amazon Application Client Id - leave blank normally.
    -client_id>
    -Amazon Application Client Secret - leave blank normally.
    -client_secret>
    +Amazon Application Client Id - required.
    +client_id> your client ID goes here
    +Amazon Application Client Secret - required.
    +client_secret> your client secret goes here
    +Auth server URL - leave blank to use Amazon's.
    +auth_url> Optional auth URL
    +Token server url - leave blank to use Amazon's.
    +token_url> Optional token URL
     Remote config
    +Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
     Use auto config?
      * Say Y if not sure
      * Say N if you are working on a remote or headless machine
    @@ -2275,15 +2450,17 @@ Waiting for code...
     Got code
     --------------------
     [remote]
    -client_id =
    -client_secret =
    +client_id = your client ID goes here
    +client_secret = your client secret goes here
    +auth_url = Optional auth URL
    +token_url = Optional token URL
     token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
     --------------------
     y) Yes this is OK
     e) Edit this remote
     d) Delete this remote
     y/e/d> y
    See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    Once configured you can then use rclone like this,

    List directories in top level of your Amazon Drive

    @@ -2292,7 +2469,7 @@ y/e/d> y
    rclone ls remote:

    To copy a local directory to an Amazon Drive directory called backup

    rclone copy /home/source remote:backup
    Modified time and MD5SUMs

    Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.

    It does store MD5SUMs so for a more accurate sync, you can use the --checksum flag.

    Deleting files

    @@ -2315,7 +2492,7 @@ y/e/d> y

    Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries flag) which should hopefully work around this problem.

    Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.

At the time of writing (Jan 2016) this is in the area of 50GB per file. This means that larger files are likely to fail.

    -

    Unfortunatly there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.

    +

    Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.
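For example (remote:backup is a hypothetical remote path):

```shell
# Skip files over ~50GB rather than uploading, failing and retrying them
rclone sync --max-size 50000M /home/source remote:backup
```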

    Microsoft OneDrive

    Paths are specified as remote:path

    Paths may be as deep as required, eg remote:directory/subdirectory.

@@ -2382,7 +2559,7 @@
 y) Yes this is OK
 e) Edit this remote
 d) Delete this remote
 y/e/d> y
    See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    Once configured you can then use rclone like this,

    List directories in top level of your OneDrive

    @@ -2391,7 +2568,7 @@ y/e/d> y
    rclone ls remote:

    To copy a local directory to an OneDrive directory called backup

    rclone copy /home/source remote:backup
    Modified time and hashes

    OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

OneDrive supports SHA1 type hashes, so you can use the --checksum flag.

    Deleting files

@@ -2472,7 +2649,7 @@
 y) Yes this is OK
 e) Edit this remote
 d) Delete this remote
 y/e/d> y
    See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Hubic. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    Once configured you can then use rclone like this,

    List containers in the top level of your Hubic

    @@ -2483,6 +2660,8 @@ y/e/d> y
    rclone copy /home/source remote:backup

    If you want the directory to be visible in the official Hubic browser, you need to copy your files to the default directory

    rclone copy /home/source remote:default/backup
    +

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    Modified time

    The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

    @@ -2556,6 +2735,8 @@ y/e/d> y
    rclone ls remote:bucket

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    rclone sync /home/local/directory remote:bucket
    +

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    Modified time

    The modified time is stored as metadata on the object as X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.

    Modified times are used in syncing and are fully supported except in the case of updating a modification time on an existing object. In this case the object will be uploaded again as B2 doesn't have an API method to set the modification time independent of doing an upload.

    @@ -2642,7 +2823,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test

    Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.

    Note that when using --b2-versions no file write operations are permitted, so you can't upload files or delete them.

    Yandex Disk

    Yandex Disk is a cloud storage solution created by Yandex.

    Yandex paths may be as deep as required, eg remote:directory/subdirectory.

    Here is an example of making a yandex configuration. First run

    rclone config
@@ -2706,7 +2887,7 @@
 y) Yes this is OK
 e) Edit this remote
 d) Delete this remote
 y/e/d> y
    See the remote setup docs for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    Once configured you can then use rclone like this,

    See top level directories

    @@ -2717,6 +2898,8 @@ y/e/d> y
    rclone ls remote:directory

    Sync /home/local/directory to the remote path, deleting any excess files in the path.

    rclone sync /home/local/directory remote:directory
    +

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    Modified time

    Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format.

    MD5 checksums

@@ -2748,23 +2931,25 @@ Choose a number from below, or type in your own value
   \ "dropbox"
 5 / Encrypt/Decrypt a remote
   \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
+ 6 / FTP Connection
+   \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
- 7 / Google Drive
+ 8 / Google Drive
   \ "drive"
- 8 / Hubic
+ 9 / Hubic
   \ "hubic"
- 9 / Local Disk
+10 / Local Disk
   \ "local"
-10 / Microsoft OneDrive
+11 / Microsoft OneDrive
   \ "onedrive"
-11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
-12 / SSH/SFTP Connection
+13 / SSH/SFTP Connection
   \ "sftp"
-13 / Yandex Disk
+14 / Yandex Disk
   \ "yandex"
-Storage> 12
+Storage> sftp
 SSH host to connect to
 Choose a number from below, or type in your own value
  1 / Connect to example.com
@@ -2772,7 +2957,7 @@ Choose a number from below, or type in your own value
 host> example.com
 SSH username, leave blank for current username, ncw
 user>
-SSH port
+SSH port, leave blank to use default (22)
 port>
 SSH password, leave blank to use ssh-agent
 y) Yes type in my own password
@@ -2805,6 +2990,7 @@ y/e/d> y

    Modified times are used in syncing and are fully supported.

    Limitations

    SFTP does not support any checksums.

    +

The only ssh agent supported under Windows is PuTTY's pageant.

    SFTP isn't supported under plan9 until this issue is fixed.

    Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth

    Note that --timeout isn't supported (but --contimeout is).

@@ -2858,6 +3044,8 @@ Choose a number from below, or type in your own value
   \ "off"
 2 / Encrypt the filenames see the docs for the details.
   \ "standard"
+ 3 / Very simple filename obfuscation.
+   \ "obfuscate"
 filename_encryption> 2
 Password or pass phrase for encryption.
 y) Yes type in my own password
@@ -2960,9 +3148,20 @@ $ rclone -q ls secret:
• identical file names will have identical uploaded names
• can use shortcuts to shorten the directory recursion

    Obfuscation

    +

    This is a simple "rotate" of the filename, with each file having a rot distance based on the filename. We store the distance at the beginning of the filename. So a file called "hello" may become "53.jgnnq"
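The doc's "hello" → "jgnnq" example corresponds to a forward rotation of 2; the rotation step (not rclone's actual distance derivation, which is internal) can be reproduced with tr:

```shell
# Caesar-rotate lower and upper case letters forward by 2 positions
printf '%s\n' "hello" | tr 'a-zA-Z' 'c-zabC-ZAB'   # jgnnq
```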

    +

    This is not a strong encryption of filenames, but it may stop automated scanning tools from picking up on filename patterns. As such it's an intermediate between "off" and "standard". The advantage is that it allows for longer path segment names.

    +

    There is a possibility with some unicode based filenames that the obfuscation is weak and may map lower case characters to upper case equivalents. You can not rely on this for strong protection.

    +

    Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.

    There may be an even more secure file name encryption mode in the future which will address the long file name problem.

    Modified time and hashes

    Crypt stores modification times using the underlying remote so support depends on that.

    Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.

    Note that you should use the rclone cryptcheck command to check the integrity of a crypted remote instead of rclone check which can't check the checksums properly.

    @@ -3035,12 +3234,102 @@ $ rclone -q ls secret:

    Key derivation

    Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional user-supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.

    scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
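    The derivation with those parameters can be reproduced with Python's standard library `hashlib.scrypt` — a minimal sketch, where the password and salt are placeholders rather than rclone's internal defaults, and the 32+32+16 split follows the description above:

```python
import hashlib

# Derive 80 bytes of key material with the parameters quoted above:
# N=16384, r=8, p=1.  The password and salt here are illustrative only.
key = hashlib.scrypt(b"my password", salt=b"my salt (password2)",
                     n=16384, r=8, p=1, dklen=80)

# Split into two 32-byte keys and a 16-byte remainder (32+32+16 = 80).
key_a, key_b, tweak = key[:32], key[32:64], key[64:]
```

    The memory-hard nature of scrypt (roughly 128 * r * N bytes, about 16 MiB here) is what makes bulk dictionary attacks expensive.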

    +

    FTP

    +

    FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package.

    +

    Here is an example of making an FTP configuration. First run

    +
    rclone config
    +

    This will guide you through an interactive setup process. An FTP remote only needs a host, a username, and a password. For an anonymous FTP server, use anonymous as the username and your email address as the password.

    +
    No remotes found - make a new one
    +n) New remote
    +r) Rename remote
    +c) Copy remote
    +s) Set configuration password
    +q) Quit config
    +n/r/c/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Dropbox
    +   \ "dropbox"
    + 5 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 6 / FTP Connection 
    +   \ "ftp"
    + 7 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 8 / Google Drive
    +   \ "drive"
    + 9 / Hubic
    +   \ "hubic"
    +10 / Local Disk
    +   \ "local"
    +11 / Microsoft OneDrive
    +   \ "onedrive"
    +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +13 / SSH/SFTP Connection
    +   \ "sftp"
    +14 / Yandex Disk
    +   \ "yandex"
    +Storage> ftp
    +FTP host to connect to
    +Choose a number from below, or type in your own value
    + 1 / Connect to ftp.example.com
    +   \ "ftp.example.com"
    +host> ftp.example.com
    +FTP username, leave blank for current username, ncw
    +user>
    +FTP port, leave blank to use default (21)
    +port>
    +FTP password
    +y) Yes type in my own password
    +g) Generate random password
    +y/g> y
    +Enter the password:
    +password:
    +Confirm the password:
    +password:
    +Remote config
    +--------------------
    +[remote]
    +host = ftp.example.com
    +user = 
    +port =
    +pass = *** ENCRYPTED ***
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    This remote is called remote and can now be used like this

    +

    See all directories in the home directory

    +
    rclone lsd remote:
    +

    Make a new directory

    +
    rclone mkdir remote:path/to/directory
    +

    List the contents of a directory

    +
    rclone ls remote:path/to/directory
    +

    Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

    +
    rclone sync /home/local/directory remote:directory
    +

    Modified time

    +

    FTP does not support modified times. Any times you see on the server will be the time of upload.

    +

    Checksums

    +

    FTP does not support any checksums.

    +

    Limitations

    +

    Note that since FTP isn't HTTP-based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth

    +

    Note that --timeout isn't supported (but --contimeout is).

    +

    FTP could support server side move but doesn't yet.

    Local Filesystem

    Local paths are specified as normal filesystem paths, eg /path/to/wherever, so

    rclone sync /home/source /tmp/destination

    Will sync /home/source to /tmp/destination

    These can be configured into the config file for consistency's sake, but it is probably easier not to.

    -

    Modified time

    +

    Modified time

    Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
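    You can inspect the precision your own OS and filesystem expose with a generic Python sketch (not rclone code) using `os.stat`:

```python
import os
import tempfile

# Create a scratch file and read its modification time at the finest
# precision the OS exposes.  st_mtime_ns is an integer nanosecond count;
# the real granularity depends on the platform and filesystem.
fd, path = tempfile.mkstemp()
os.close(fd)
mtime_ns = os.stat(path).st_mtime_ns
print(mtime_ns)
os.unlink(path)
```

    Counting trailing zeros in the printed value gives a hint of the effective granularity on that filesystem.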

    Filenames

    Filenames are expected to be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.

    @@ -3085,6 +3374,10 @@ nounc = true 6 two/three 6 b/two 6 b/one +

    --no-local-unicode-normalization

    +

    By default rclone normalizes (NFC) the Unicode representation of filenames and directories. This flag disables that normalization and uses the same representation as the local filesystem.

    +

    This can be useful if you need to retain the local unicode representation and you are using a cloud provider which supports unnormalized names (e.g. S3 or ACD).

    +

    This should also work with any provider if you are using crypt and have file name encryption (the default) or obfuscation turned on.
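    The normalization this flag controls can be seen with Python's standard `unicodedata` module — a generic illustration of NFC vs NFD, not rclone code:

```python
import unicodedata

# "é" can be stored as one precomposed code point (NFC) or as
# "e" plus a combining acute accent (NFD); both render identically.
nfd = "e\u0301"
nfc = unicodedata.normalize("NFC", nfd)
print(len(nfd), len(nfc))  # the NFD form is two code points, the NFC form one
```

    A remote that stores names byte-for-byte (such as S3 or ACD) will treat these two forms as different objects, which is why retaining the local representation can matter.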

    --one-file-system, -x

    This tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.

    For example if you have a directory hierarchy like this

    @@ -3557,7 +3850,7 @@ nounc = true
  • Upload releases to github too
  • Swift
  • Fix sync for chunked files
  • -
  • One Drive
  • +
  • OneDrive
  • Re-enable server side copy
  • Don't mask HTTP error codes with JSON decode error
  • S3
  • @@ -3579,13 +3872,13 @@ nounc = true
  • This could have caused data loss for files > 5GB in size
  • Use ContentType from Object to avoid lookups in listings
  • -
  • One Drive
  • +
  • OneDrive
  • disable server side copy as it seems to be broken at Microsoft
  • v1.24 - 2015-11-07

    Email

    -

    Or if all else fails or you want to ask something private or confidential email

    +

    Or if all else fails or you want to ask something private or confidential email Nick Craig-Wood

    diff --git a/MANUAL.md b/MANUAL.md index 278eb8ae3..6e168147f 100644 --- a/MANUAL.md +++ b/MANUAL.md @@ -1,11 +1,11 @@ % rclone(1) User Manual % Nick Craig-Wood -% Mar 18, 2017 +% Jun 15, 2017 Rclone ====== -[![Logo](http://rclone.org/img/rclone-120x120.png)](http://rclone.org/) +[![Logo](https://rclone.org/img/rclone-120x120.png)](https://rclone.org/) Rclone is a command line program to sync files and directories to and from @@ -15,11 +15,12 @@ Rclone is a command line program to sync files and directories to and from * Dropbox * Google Cloud Storage * Amazon Drive - * Microsoft One Drive + * Microsoft OneDrive * Hubic * Backblaze B2 * Yandex Disk * SFTP + * FTP * The local filesystem Features @@ -27,20 +28,20 @@ Features * MD5/SHA1 hashes checked at all times for file integrity * Timestamps preserved on files * Partial syncs supported on a whole file basis - * [Copy](http://rclone.org/commands/rclone_copy/) mode to just copy new/changed files - * [Sync](http://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical - * [Check](http://rclone.org/commands/rclone_check/) mode to check for file hash equality + * [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files + * [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical + * [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality * Can sync to and from network, eg two different cloud accounts - * Optional encryption ([Crypt](http://rclone.org/crypt/)) - * Optional FUSE mount ([rclone mount](http://rclone.org/commands/rclone_mount/)) + * Optional encryption ([Crypt](https://rclone.org/crypt/)) + * Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/)) Links - * [Home page](http://rclone.org/) - * [Github project page for source and bug tracker](http://github.com/ncw/rclone) + * [Home page](https://rclone.org/) + * [Github project page for source and bug 
tracker](https://github.com/ncw/rclone) * [Rclone Forum](https://forum.rclone.org) * Google+ page - * [Downloads](http://rclone.org/downloads/) + * [Downloads](https://rclone.org/downloads/) # Install # @@ -48,20 +49,20 @@ Rclone is a Go program and comes as a single binary file. ## Quickstart ## - * [Download](http://rclone.org/downloads/) the relevant binary. + * [Download](https://rclone.org/downloads/) the relevant binary. * Unpack and the `rclone` binary. - * Run `rclone config` to setup. See [rclone config docs](http://rclone.org/docs/) for more details. + * Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details. See below for some expanded Linux / macOS instructions. -See the [Usage section](http://rclone.org/docs/) of the docs for how to use rclone, or +See the [Usage section](https://rclone.org/docs/) of the docs for how to use rclone, or run `rclone -h`. ## Linux installation from precompiled binary ## Fetch and unpack - curl -O http://downloads.rclone.org/rclone-current-linux-amd64.zip + curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip unzip rclone-current-linux-amd64.zip cd rclone-*-linux-amd64 @@ -77,7 +78,7 @@ Install manpage sudo cp rclone.1 /usr/local/share/man/man1/ sudo mandb -Run `rclone config` to setup. See [rclone config docs](http://rclone.org/docs/) for more details. +Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details. rclone config @@ -85,7 +86,7 @@ Run `rclone config` to setup. See [rclone config docs](http://rclone.org/docs/) Download the latest version of rclone. - cd && curl -O http://downloads.rclone.org/rclone-current-osx-amd64.zip + cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip Unzip the download and cd to the extracted folder. @@ -99,13 +100,13 @@ Remove the leftover files. cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip -Run `rclone config` to setup. 
See [rclone config docs](http://rclone.org/docs/) for more details. +Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details. rclone config ## Install from source ## -Make sure you have at least [Go](https://golang.org/) 1.5 installed. +Make sure you have at least [Go](https://golang.org/) 1.6 installed. Make sure your `GOPATH` is set, then: go get -u -v github.com/ncw/rclone @@ -138,7 +139,7 @@ Instructions * install Snapd on your distro using the instructions below * sudo snap install rclone --classic - * Run `rclone config` to setup. See [rclone config docs](http://rclone.org/docs/) for more details. + * Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details. See below for how to install snapd if it isn't already installed @@ -179,7 +180,7 @@ Install the [snap meta layer](https://github.com/morphis/meta-snappy/blob/master #### openSUSE #### - sudo zypper addrepo http://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/ snappy + sudo zypper addrepo https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/ snappy sudo zypper install snapd #### OpenWrt #### @@ -201,19 +202,20 @@ option: See the following for detailed instructions for - * [Google drive](http://rclone.org/drive/) - * [Amazon S3](http://rclone.org/s3/) - * [Swift / Rackspace Cloudfiles / Memset Memstore](http://rclone.org/swift/) - * [Dropbox](http://rclone.org/dropbox/) - * [Google Cloud Storage](http://rclone.org/googlecloudstorage/) - * [Local filesystem](http://rclone.org/local/) - * [Amazon Drive](http://rclone.org/amazonclouddrive/) - * [Backblaze B2](http://rclone.org/b2/) - * [Hubic](http://rclone.org/hubic/) - * [Microsoft One Drive](http://rclone.org/onedrive/) - * [Yandex Disk](http://rclone.org/yandex/) - * [SFTP](http://rclone.org/sftp/) - * [Crypt](http://rclone.org/crypt/) - to encrypt other remotes + * [Google drive](https://rclone.org/drive/) + * [Amazon 
S3](https://rclone.org/s3/) + * [Swift / Rackspace Cloudfiles / Memset Memstore](https://rclone.org/swift/) + * [Dropbox](https://rclone.org/dropbox/) + * [Google Cloud Storage](https://rclone.org/googlecloudstorage/) + * [Local filesystem](https://rclone.org/local/) + * [Amazon Drive](https://rclone.org/amazonclouddrive/) + * [Backblaze B2](https://rclone.org/b2/) + * [Hubic](https://rclone.org/hubic/) + * [Microsoft OneDrive](https://rclone.org/onedrive/) + * [Yandex Disk](https://rclone.org/yandex/) + * [SFTP](https://rclone.org/sftp/) + * [FTP](https://rclone.org/ftp/) + * [Crypt](https://rclone.org/crypt/) - to encrypt other remotes Usage ----- @@ -463,7 +465,7 @@ to check all the data. ``` -rclone check source:path dest:path +rclone check source:path dest:path [flags] ``` ### Options @@ -670,7 +672,7 @@ Or ``` -rclone dedupe [mode] remote:path +rclone dedupe [mode] remote:path [flags] ``` ### Options @@ -724,7 +726,7 @@ Note that if offset is negative it will count from the end, so ``` -rclone cat remote:path +rclone cat remote:path [flags] ``` ### Options @@ -812,6 +814,24 @@ After it has run it will log the status of the encryptedremote:. rclone cryptcheck remote:path cryptedremote:path ``` +## rclone dbhashsum + +Produces a Dropbbox hash file for all the objects in the path. + +### Synopsis + + + +Produces a Dropbox hash file for all the objects in the path. The +hashes are calculated according to [Dropbox content hash +rules](https://www.dropbox.com/developers/reference/content-hash). +The output is in the same format as md5sum and sha1sum. + + +``` +rclone dbhashsum remote:path +``` + ## rclone genautocomplete Output bash completion script for rclone. @@ -853,7 +873,13 @@ supplied. These are in a format suitable for hugo to render into the rclone.org website. 
``` -rclone gendocs output_directory +rclone gendocs output_directory [flags] +``` + +### Options + +``` + -h, --help help for gendocs ``` ## rclone listremotes @@ -870,7 +896,7 @@ When uses with the -l flag it lists the types too. ``` -rclone listremotes +rclone listremotes [flags] ``` ### Options @@ -879,6 +905,52 @@ rclone listremotes -l, --long Show the type as well as names. ``` +## rclone lsjson + +List directories and objects in the path in JSON format. + +### Synopsis + + +List directories and objects in the path in JSON format. + +The output is an array of Items, where each Item looks like this + + { + "Hashes" : { + "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", + "MD5" : "b1946ac92492d2347c6235b4d2611184", + "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" + }, + "IsDir" : false, + "ModTime" : "2017-05-31T16:15:57.034468261+01:00", + "Name" : "file.txt", + "Path" : "full/path/goes/here/file.txt", + "Size" : 6 + } + +If --hash is not specified the the Hashes property won't be emitted. + +If --no-modtime is specified then ModTime will be blank. + +The time is in RFC3339 format with nanosecond precision. + +The whole output can be processed as a JSON blob, or alternatively it +can be processed line by line as each item is written one to a line. + + +``` +rclone lsjson remote:path [flags] +``` + +### Options + +``` + --hash Include hashes in the output (may take longer). + --no-modtime Don't read the modification time (can speed things up). + -R, --recursive Recurse into the listing. +``` + ## rclone mount Mount the remote as a mountpoint. **EXPERIMENTAL** @@ -894,20 +966,19 @@ This is **EXPERIMENTAL** - use with care. First set up your remote using `rclone config`. Check it works with `rclone ls` etc. -Start the mount like this (note the & on the end to put rclone in the background). 
+Start the mount like this - rclone mount remote:path/to/files /path/to/local/mount & + rclone mount remote:path/to/files /path/to/local/mount -Stop the mount with +When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, +the mount is automatically stopped. +The umount operation can fail, for example when the mountpoint is busy. +When that happens, it is the user's responsibility to stop the mount manually with + + # Linux fusermount -u /path/to/local/mount - -Or if that fails try - - fusermount -z -u /path/to/local/mount - -Or with OS X - + # OS X umount /path/to/local/mount ### Limitations ### @@ -940,6 +1011,21 @@ mount won't do that, so will be less reliable than the rclone command. Note that all the rclone filters can be used to select a subset of the files to be visible in the mount. +### Directory Cache ### + +Using the `--dir-cache-time` flag, you can set how long a +directory should be considered up to date and not refreshed from the +backend. Changes made locally in the mount may appear immediately or +invalidate the cache. However, changes done on the remote will only +be picked up once the cache expires. + +Alternatively, you can send a `SIGHUP` signal to rclone for +it to flush all directory caches, regardless of how old they are. +Assuming only one rclone instance is running, you can reset the cache +like this: + + kill -SIGHUP $(pidof rclone) + ### Bugs ### * All the remotes should work for read, but some may not for write @@ -947,15 +1033,9 @@ files to be visible in the mount. 
* maybe should pass in size as -1 to mean work it out * Or put in an an upload cache to cache the files on disk first -### TODO ### - - * Check hashes on upload/download - * Preserve timestamps - * Move directories - ``` -rclone mount remote:path /path/to/mountpoint +rclone mount remote:path /path/to/mountpoint [flags] ``` ### Options @@ -969,8 +1049,10 @@ rclone mount remote:path /path/to/mountpoint --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --gid uint32 Override the gid field set by the filesystem. (default 502) --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k) - --no-modtime Don't read the modification time (can speed things up). + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) --read-only Mount read-only. --uid uint32 Override the uid field set by the filesystem. (default 502) --umask int Override the permission bits set by the filesystem. (default 2) @@ -1019,6 +1101,43 @@ transfer. rclone moveto source:path dest:path ``` +## rclone ncdu + +Explore a remote with a text based user interface. + +### Synopsis + + + +This displays a text based user interface allowing the navigation of a +remote. It is most useful for answering the question - "What is using +all my disk space?". + +To make the user interface it first scans the entire remote given and +builds an in memory representation. rclone ncdu can be used during +this scanning phase and you will see it building up the directory +structure as it goes along. + +Here are the keys - press '?' 
to toggle the help on and off + + ↑,↓ or k,j to Move + →,l to enter + ←,h to return + c toggle counts + g toggle graph + n,s,C sort by name,size,count + ? to toggle help on and off + q/ESC/c-C to quit + +This an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for +rclone remotes. It is missing lots of features at the moment, most +importantly deleting files, but is useful as it stands. + + +``` +rclone ncdu remote:path +``` + ## rclone obscure Obscure password for use in the rclone.conf @@ -1215,8 +1334,8 @@ At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited. -Bandwidth limits only apply to the data transfer. The don't apply to the -bandwith of the directory listings etc. +Bandwidth limits only apply to the data transfer. They don't apply to the +bandwidth of the directory listings etc. Note that the units are Bytes/s not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example let's say @@ -1252,7 +1371,7 @@ and a more accurate sync is desired than just checking the file size. This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the [overview -section](http://rclone.org/overview/). +section](https://rclone.org/overview/). Eg `rclone --checksum sync s3:/bucket swift:/bucket` would run much quicker than without the `--checksum` flag. @@ -1533,15 +1652,47 @@ Specifying `--delete-during` will delete files while checking and uploading files. This is the fastest option and uses the least memory. Specifying `--delete-after` (the default value) will delay deletion of -files until all new/updated files have been successfully transfered. +files until all new/updated files have been successfully transferred. 
The files to be deleted are collected in the copy pass then deleted -after the copy pass has completed sucessfully. The files to be +after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message `not deleting files as there were IO errors`. +### --fast-list ### + +When doing anything which involves a directory listing (eg `sync`, +`copy`, `ls` - in fact nearly every command), rclone normally lists a +directory and processes it before using more directory lists to +process any subdirectories. This can be parallelised and works very +quickly using the least amount of memory. + +However some remotes have a way of listing all files beneath a +directory in one (or a small number) of transactions. These tend to +be the bucket based remotes (eg s3, b2, gcs, swift, hubic). + +If you use the `--fast-list` flag then rclone will use this method for +listing directories. This will have the following consequences for +the listing: + + * It **will** use fewer transactions (important if you pay for them) + * It **will** use more memory. Rclone has to load the whole listing into memory. + * It *may* be faster because it uses fewer transactions + * It *may* be slower because it can't be parallelized + +rclone should always give identical results with and without +`--fast-list`. + +If you pay for transactions and can fit your entire sync listing into +memory then `--fast-list` is recommended. If you have a very big sync +to do then don't use `--fast-list` otherwise you will run out of +memory. + +If you use `--fast-list` on a remote which doesn't support it, then +rclone will just ignore it. + ### --timeout=TIME ### This sets the IO idle timeout. If a transfer has started but then @@ -1568,7 +1719,7 @@ updated if the sizes are different. 
On remotes which don't support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these -remoes, rclone will skip any files which exist on the destination and +remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file. @@ -1666,7 +1817,7 @@ export RCLONE_CONFIG_PASS Then source the file when you want to use it. From the shell you would do `source set-rclone-password`. It will then ask you for the -password and set it in the envonment variable. +password and set it in the environment variable. If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter @@ -1739,7 +1890,7 @@ If you are only copying a small number of files and/or have a large number of files on the destination then `--no-traverse` will stop rclone listing the destination and save time. -However if you are copying a large number of files, escpecially if you +However if you are copying a large number of files, especially if you are doing a copy where lots of the files haven't changed and won't need copying then you shouldn't use `--no-traverse`. @@ -1767,7 +1918,7 @@ For the filtering options * `--max-age` * `--dump-filters` -See the [filtering section](http://rclone.org/filtering/). +See the [filtering section](https://rclone.org/filtering/). Logging ------- @@ -1814,7 +1965,7 @@ immediately before exiting. When rclone is running it will accumulate errors as it goes along, and only exit with an non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high -priority log message (visibile with `-q`) showing the message and +priority log message (visible with `-q`) showing the message and which file caused the problem. 
A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry @@ -2393,14 +2544,15 @@ Here is an overview of the major features of each cloud storage system. | Google Drive | MD5 | Yes | No | Yes | R/W | | Amazon S3 | MD5 | Yes | No | No | R/W | | Openstack Swift | MD5 | Yes | No | No | R/W | -| Dropbox | - | No | Yes | No | R | +| Dropbox | DBHASH †| Yes | Yes | No | - | | Google Cloud Storage | MD5 | Yes | No | No | R/W | | Amazon Drive | MD5 | No | Yes | No | R | -| Microsoft One Drive | SHA1 | Yes | Yes | No | R | +| Microsoft OneDrive | SHA1 | Yes | Yes | No | R | | Hubic | MD5 | Yes | No | No | R/W | | Backblaze B2 | SHA1 | Yes | No | No | R/W | | Yandex Disk | MD5 | Yes | No | No | R/W | | SFTP | - | Yes | Depends | No | - | +| FTP | - | No | Yes | No | - | | The local filesystem | All | Yes | Depends | No | - | ### Hash ### @@ -2413,6 +2565,10 @@ the `check` command. To use the checksum checks between filesystems they must support a common hash type. +† Note that Dropbox supports [its own custom +hash](https://www.dropbox.com/developers/reference/content-hash). +This is an SHA256 sum of all the 4MB block SHA256s. + ### ModTime ### The cloud storage system supports setting modification times on @@ -2476,20 +2632,21 @@ All the remotes support a basic set of features, but there are some optional features supported by some remotes used to make some operations more efficient. 
-| Name | Purge | Copy | Move | DirMove | CleanUp | -| ---------------------- |:-----:|:----:|:----:|:-------:|:-------:| -| Google Drive | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | -| Amazon S3 | No | Yes | No | No | No | -| Openstack Swift | Yes † | Yes | No | No | No | -| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | -| Google Cloud Storage | Yes | Yes | No | No | No | -| Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | -| Microsoft One Drive | Yes | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) | -| Hubic | Yes † | Yes | No | No | No | -| Backblaze B2 | No | No | No | No | Yes | -| Yandex Disk | Yes | No | No | No | No [#575](https://github.com/ncw/rclone/issues/575) | -| SFTP | No | No | Yes | Yes | No | -| The local filesystem | Yes | No | Yes | Yes | No | +| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | +| ---------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:| +| Google Drive | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | +| Amazon S3 | No | Yes | No | No | No | Yes | +| Openstack Swift | Yes † | Yes | No | No | No | Yes | +| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | +| Google Cloud Storage | Yes | Yes | No | No | No | Yes | +| Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | +| Microsoft OneDrive | Yes | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) | No | +| Hubic | Yes † | Yes | No | No | No | Yes | +| Backblaze B2 | No | No | No | No | Yes | Yes | +| Yandex Disk | Yes | No | No | No | No [#575](https://github.com/ncw/rclone/issues/575) | Yes | +| SFTP | No | No | Yes | Yes | No | No | +| FTP | No | No | Yes | Yes | No | No | +| 
The local filesystem | Yes | No | Yes | Yes | No | No | ### Purge ### @@ -2534,6 +2691,12 @@ This is used for emptying the trash for a remote by `rclone cleanup`. If the server can't do `CleanUp` then `rclone cleanup` will return an error. +### ListR ### + +The remote supports a recursive list to list all the contents beneath +a directory quickly. This enables the `--fast-list` flag to work. +See the [rclone docs](/docs/#fast-list) for more details. + Google Drive ----------------------------------------- @@ -2552,10 +2715,13 @@ Here is an example of how to make a remote called `remote`. First run: This will guide you through an interactive setup process: ``` +No remotes found - make a new one n) New remote -d) Delete remote +r) Rename remote +c) Copy remote +s) Set configuration password q) Quit config -e/n/d/q> n +n/r/c/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value @@ -2569,27 +2735,29 @@ Choose a number from below, or type in your own value \ "dropbox" 5 / Encrypt/Decrypt a remote \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) + 6 / FTP Connection + \ "ftp" + 7 / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" - 7 / Google Drive + 8 / Google Drive \ "drive" - 8 / Hubic + 9 / Hubic \ "hubic" - 9 / Local Disk +10 / Local Disk \ "local" -10 / Microsoft OneDrive +11 / Microsoft OneDrive \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" -12 / SSH/SFTP Connection +13 / SSH/SFTP Connection \ "sftp" -13 / Yandex Disk +14 / Yandex Disk \ "yandex" -Storage> 7 +Storage> 8 Google Application Client Id - leave blank normally. -client_id> +client_id> Google Application Client Secret - leave blank normally. -client_secret> +client_secret> Remote config Use auto config? 
* Say Y if not sure @@ -2601,10 +2769,14 @@ If your browser doesn't open automatically go to the following link: http://127. Log in and authorize rclone for access Waiting for code... Got code +Configure this as a team drive? +y) Yes +n) No +y/n> n -------------------- [remote] -client_id = -client_secret = +client_id = +client_secret = token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null} -------------------- y) Yes this is OK @@ -2634,6 +2806,44 @@ To copy a local directory to a drive directory called backup rclone copy /home/source remote:backup +### Team drives ### + +If you want to configure the remote to point to a Google Team Drive +then answer `y` to the question `Configure this as a team drive?`. + +This will fetch the list of Team Drives from google and allow you to +configure which one you want to use. You can also type in a team +drive ID if you prefer. + +For example: + +``` +Configure this as a team drive? +y) Yes +n) No +y/n> y +Fetching team drive list... +Choose a number from below, or type in your own value + 1 / Rclone Test + \ "xxxxxxxxxxxxxxxxxxxx" + 2 / Rclone Test 2 + \ "yyyyyyyyyyyyyyyyyyyy" + 3 / Rclone Test 3 + \ "zzzzzzzzzzzzzzzzzzzz" +Enter a Team Drive ID> 1 +-------------------- +[remote] +client_id = +client_secret = +token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null} +team_drive = xxxxxxxxxxxxxxxxxxxx +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + ### Modified time ### Google drive stores modification times accurate to 1 ms. 
@@ -2916,7 +3126,7 @@ Choose a number from below, or type in your own value \ "sa-east-1" location_constraint> 1 Canned ACL used when creating buckets and/or storing objects in S3. -For more info visit http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" @@ -2990,6 +3200,12 @@ files in the bucket. rclone sync /home/local/directory remote:bucket +### --fast-list ### + +This remote supports `--fast-list` which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](/docs/#fast-list) for more details. + ### Modified time ### The modified time is stored as metadata on the object as @@ -3025,6 +3241,54 @@ credentials. In order of precedence: If none of these options actually ends up providing `rclone` with AWS credentials then S3 interaction will be non-authenticated (see below). +### S3 Permissions ### + +When using the `sync` subcommand of `rclone` the following minimum +permissions are required to be available on the bucket being written to: + +* `ListBucket` +* `DeleteObject` +* `GetObject` +* `PutObject` +* `PutObjectACL` + +Example policy: + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "arn:aws:iam::USER_SID:user/USER_NAME" + }, + "Action": [ + "s3:ListBucket", + "s3:DeleteObject", + "s3:GetObject", + "s3:PutObject", + "s3:PutObjectAcl" + ], + "Resource": [ + "arn:aws:s3:::BUCKET_NAME/*", + "arn:aws:s3:::BUCKET_NAME" + ] + } + ] +} +``` + +Notes on above: + +1. This is a policy that can be used when creating a bucket. It assumes + that `USER_NAME` has been created. +2. The Resource entry must include both resource ARNs, as one implies + the bucket and the other implies the bucket's objects.
+ +For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b) +that will generate one or more buckets that will work with `rclone sync`. + ### Specific options ### Here are the command line options specific to this cloud storage @@ -3034,7 +3298,7 @@ system. Canned ACL used when creating buckets and/or storing objects in S3. -For more info visit the [canned ACL docs](http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl). +For more info visit the [canned ACL docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl). #### --s3-storage-class=STRING #### @@ -3190,11 +3454,11 @@ So once set up, for example to copy files into a bucket Swift ---------------------------------------- -Swift refers to [Openstack Object Storage](http://www.openstack.org/software/openstack-storage/). +Swift refers to [Openstack Object Storage](https://www.openstack.org/software/openstack-storage/). Commercial implementations of that being: - * [Rackspace Cloud Files](http://www.rackspace.com/cloud/files/) - * [Memset Memstore](http://www.memset.com/cloud/storage/) + * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/) + * [Memset Memstore](https://www.memset.com/cloud/storage/) Paths are specified as `remote:container` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:container/path/to/dir`. @@ -3341,6 +3605,12 @@ tenant = $OS_TENANT_NAME Note that you may (or may not) need to set `region` too - try without first. +### --fast-list ### + +This remote supports `--fast-list` which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](/docs/#fast-list) for more details. 
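+ +As a concrete illustration, `--fast-list` needs no configuration of its +own; it is simply added to whichever listing-heavy command you run (the +remote and container names below are placeholders): + + rclone sync --fast-list /home/local/directory remote:container +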
+ ### Specific options ### Here are the command line options specific to this cloud storage @@ -3472,20 +3742,20 @@ To copy a local directory to a dropbox directory called backup rclone copy /home/source remote:backup -### Modified time and MD5SUMs ### +### Modified time and Hashes ### -Dropbox doesn't provide the ability to set modification times in the -V1 public API, so rclone can't support modified time with Dropbox. +Dropbox supports modified times, but the only way to set a +modification time is to re-upload the file. -This may change in the future - see these issues for details: +This means that if you uploaded your data with an older version of +rclone which didn't support the v2 API and modified times, rclone will +decide to upload all your old data to fix the modification times. If +you don't want this to happen use `--size-only` or `--checksum` flag +to stop it. - * [Dropbox V2 API](https://github.com/ncw/rclone/issues/349) - * [Allow syncs for remotes that can't set modtime on existing objects](https://github.com/ncw/rclone/issues/348) - -Dropbox doesn't return any sort of checksum (MD5 or SHA1). - -Together that means that syncs to dropbox will effectively have the -`--size-only` flag set. +Dropbox supports [its own hash +type](https://www.dropbox.com/developers/reference/content-hash) which +is checked for all transfers. ### Specific options ### @@ -3511,7 +3781,7 @@ attempt to upload one of those file names, but the sync won't fail. If you have more than 10,000 files in a directory then `rclone purge dropbox:dir` will return the error `Failed to purge: There are too many files involved in this operation`. As a work-around do an -`rclone delete dropbix:dir` followed by an `rclone rmdir dropbox:dir`. +`rclone delete dropbox:dir` followed by an `rclone rmdir dropbox:dir`. 
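The Dropbox hash type referred to above follows Dropbox's published content-hash scheme: SHA-256 each 4 MB block of the file, concatenate the raw block digests, and SHA-256 the result. A minimal sketch, assuming the whole file fits in memory (the function name is ours, not rclone's):

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # Dropbox content hashes are computed over 4 MB blocks

def dropbox_content_hash(data: bytes) -> str:
    """SHA-256 each 4 MB block, concatenate the raw digests, SHA-256 the result."""
    block_digests = b"".join(
        hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
        for i in range(0, len(data), BLOCK_SIZE)
    )
    return hashlib.sha256(block_digests).hexdigest()

# For a file smaller than one block this reduces to sha256(sha256(data)).
```

For the six-byte file containing `hello\n` this should give the same `DropboxHash` value shown in the `rclone lsjson` example elsewhere in this manual.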
Google Cloud Storage ------------------------------------------------- @@ -3677,6 +3947,12 @@ to your Service Account credentials at the `service_account_file` prompt and rclone won't use the browser based authentication flow. +### --fast-list ### + +This remote supports `--fast-list` which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](/docs/#fast-list) for more details. + ### Modified time ### Google Cloud Storage stores md5sums natively and rclone stores @@ -3694,6 +3970,26 @@ The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. `rclone config` walks you through it. +The configuration process for Amazon Drive may involve using an [oauth +proxy](https://github.com/ncw/oauthproxy). This is used to keep the +Amazon credentials out of the source code. The proxy runs in Google's +very secure App Engine environment and doesn't store any credentials +which pass through it. + +**NB** rclone does not currently have its own Amazon Drive +credentials (see [the +forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/) +for why) so you will either need to have your own `client_id` and +`client_secret` with Amazon Drive, or use a third party oauth proxy +in which case you will need to enter `client_id`, `client_secret`, +`auth_url` and `token_url`. + +Note also that if you are not using Amazon's `auth_url` and `token_url` +(ie you filled in something for those), then if setting up on a remote +machine you can only use the [copying the config method of +configuration](https://rclone.org/remote_setup/#configuring-by-copying-the-config-file) +- `rclone authorize` will not work. + Here is an example of how to make a remote called `remote`. First run: rclone config @@ -3701,10 +3997,13 @@ Here is an example of how to make a remote called `remote`.
First run: This will guide you through an interactive setup process: ``` +No remotes found - make a new one n) New remote -d) Delete remote +r) Rename remote +c) Copy remote +s) Set configuration password q) Quit config -e/n/d/q> n +n/r/c/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value @@ -3718,28 +4017,35 @@ Choose a number from below, or type in your own value \ "dropbox" 5 / Encrypt/Decrypt a remote \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) + 6 / FTP Connection + \ "ftp" + 7 / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" - 7 / Google Drive + 8 / Google Drive \ "drive" - 8 / Hubic + 9 / Hubic \ "hubic" - 9 / Local Disk +10 / Local Disk \ "local" -10 / Microsoft OneDrive +11 / Microsoft OneDrive \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" -12 / SSH/SFTP Connection +13 / SSH/SFTP Connection \ "sftp" -13 / Yandex Disk +14 / Yandex Disk \ "yandex" Storage> 1 -Amazon Application Client Id - leave blank normally. -client_id> -Amazon Application Client Secret - leave blank normally. -client_secret> +Amazon Application Client Id - required. +client_id> your client ID goes here +Amazon Application Client Secret - required. +client_secret> your client secret goes here +Auth server URL - leave blank to use Amazon's. +auth_url> Optional auth URL +Token server url - leave blank to use Amazon's. +token_url> Optional token URL Remote config +Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config. Use auto config? * Say Y if not sure * Say N if you are working on a remote or headless machine @@ -3752,8 +4058,10 @@ Waiting for code... 
Got code -------------------- [remote] -client_id = -client_secret = +client_id = your client ID goes here +client_secret = your client secret goes here +auth_url = Optional auth URL +token_url = Optional token URL token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"} -------------------- y) Yes this is OK @@ -3762,7 +4070,7 @@ d) Delete this remote y/e/d> y ``` -See the [remote setup docs](http://rclone.org/remote_setup/) for how to set it up on a +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the @@ -3862,7 +4170,7 @@ larger than this will fail. At the time of writing (Jan 2016) is in the area of 50GB per file. This means that larger files are likely to fail. -Unfortunatly there is no way for rclone to see that this failure is +Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use `--max-size 50000M` option to limit the maximum size of uploaded files. Note that `--max-size` does not split @@ -3947,7 +4255,7 @@ d) Delete this remote y/e/d> y ``` -See the [remote setup docs](http://rclone.org/remote_setup/) for how to set it up on a +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the @@ -4096,7 +4404,7 @@ d) Delete this remote y/e/d> y ``` -See the [remote setup docs](http://rclone.org/remote_setup/) for how to set it up on a +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available. 
Note that rclone runs a webserver on your local machine to collect the @@ -4124,6 +4432,12 @@ browser*, you need to copy your files to the `default` directory rclone copy /home/source remote:default/backup +### --fast-list ### + +This remote supports `--fast-list` which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](/docs/#fast-list) for more details. + ### Modified time ### The modified time is stored as metadata on the object as @@ -4235,6 +4549,12 @@ excess files in the bucket. rclone sync /home/local/directory remote:bucket +### --fast-list ### + +This remote supports `--fast-list` which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](/docs/#fast-list) for more details. + ### Modified time ### The modified time is stored as metadata on the object as @@ -4443,7 +4763,7 @@ permitted, so you can't upload files or delete them. Yandex Disk ---------------------------------------- -[Yandex Disk](https://disk.yandex.com) is a cloud storage solution created by [Yandex](http://yandex.com). +[Yandex Disk](https://disk.yandex.com) is a cloud storage solution created by [Yandex](https://yandex.com). Yandex paths may be as deep as required, eg `remote:directory/subdirectory`. @@ -4515,7 +4835,7 @@ d) Delete this remote y/e/d> y ``` -See the [remote setup docs](http://rclone.org/remote_setup/) for how to set it up on a +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the @@ -4543,6 +4863,12 @@ excess files in the path. rclone sync /home/local/directory remote:directory +### --fast-list ### + +This remote supports `--fast-list` which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](/docs/#fast-list) for more details. 
+ ### Modified time ### Modified times are supported and are stored accurate to 1 ns in custom @@ -4593,23 +4919,25 @@ Choose a number from below, or type in your own value \ "dropbox" 5 / Encrypt/Decrypt a remote \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) + 6 / FTP Connection + \ "ftp" + 7 / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" - 7 / Google Drive + 8 / Google Drive \ "drive" - 8 / Hubic + 9 / Hubic \ "hubic" - 9 / Local Disk +10 / Local Disk \ "local" -10 / Microsoft OneDrive +11 / Microsoft OneDrive \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" -12 / SSH/SFTP Connection +13 / SSH/SFTP Connection \ "sftp" -13 / Yandex Disk +14 / Yandex Disk \ "yandex" -Storage> 12 +Storage> sftp SSH host to connect to Choose a number from below, or type in your own value 1 / Connect to example.com @@ -4617,7 +4945,7 @@ Choose a number from below, or type in your own value host> example.com SSH username, leave blank for current username, ncw user> -SSH port +SSH port, leave blank to use default (22) port> SSH password, leave blank to use ssh-agent y) Yes type in my own password @@ -4667,6 +4995,8 @@ Modified times are used in syncing and are fully supported. SFTP does not support any checksums. +The only ssh agent supported under Windows is PuTTY's Pageant. + SFTP isn't supported under plan9 until [this issue](https://github.com/pkg/sftp/issues/156) is fixed. @@ -4742,6 +5072,8 @@ Choose a number from below, or type in your own value \ "off" 2 / Encrypt the filenames see the docs for the details. \ "standard" + 3 / Very simple filename obfuscation. + \ "obfuscate" filename_encryption> 2 Password or pass phrase for encryption.
y) Yes type in my own password @@ -4896,6 +5228,27 @@ Standard * identical file names will have identical uploaded names * can use shortcuts to shorten the directory recursion +Obfuscation + +This is a simple "rotate" of the filename, with each file having a rot +distance based on the filename. We store the distance at the beginning +of the filename. So a file called "hello" may become "53.jgnnq". + +This is not a strong encryption of filenames, but it may stop automated +scanning tools from picking up on filename patterns. As such it's an +intermediate between "off" and "standard". The advantage is that it +allows for longer path segment names. + +There is a possibility with some unicode based filenames that the +obfuscation is weak and may map lower case characters to upper case +equivalents. You cannot rely on this for strong protection. + + * file names very lightly obfuscated + * file names can be longer than standard encryption + * can use sub paths and copy single files + * directory structure visible + * identical file names will have identical uploaded names + Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 @@ -5057,6 +5410,130 @@ then rclone uses an internal one. encrypted data. For full protection against this you should always use a salt. +FTP +------------------------------ + +FTP is the File Transfer Protocol. FTP support is provided using the +[github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp) +package. + +Here is an example of making an FTP configuration. First run + + rclone config + +This will guide you through an interactive setup process. An FTP remote only +needs a host together with a username and a password. With an anonymous FTP +server, you will need to use `anonymous` as the username and your email address as +the password.
+ +``` +No remotes found - make a new one +n) New remote +r) Rename remote +c) Copy remote +s) Set configuration password +q) Quit config +n/r/c/s/q> n +name> remote +Type of storage to configure. +Choose a number from below, or type in your own value + 1 / Amazon Drive + \ "amazon cloud drive" + 2 / Amazon S3 (also Dreamhost, Ceph, Minio) + \ "s3" + 3 / Backblaze B2 + \ "b2" + 4 / Dropbox + \ "dropbox" + 5 / Encrypt/Decrypt a remote + \ "crypt" + 6 / FTP Connection + \ "ftp" + 7 / Google Cloud Storage (this is not Google Drive) + \ "google cloud storage" + 8 / Google Drive + \ "drive" + 9 / Hubic + \ "hubic" +10 / Local Disk + \ "local" +11 / Microsoft OneDrive + \ "onedrive" +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) + \ "swift" +13 / SSH/SFTP Connection + \ "sftp" +14 / Yandex Disk + \ "yandex" +Storage> ftp +FTP host to connect to +Choose a number from below, or type in your own value + 1 / Connect to ftp.example.com + \ "ftp.example.com" +host> ftp.example.com +FTP username, leave blank for current username, ncw +user> +FTP port, leave blank to use default (21) +port> +FTP password +y) Yes type in my own password +g) Generate random password +y/g> y +Enter the password: +password: +Confirm the password: +password: +Remote config +-------------------- +[remote] +host = ftp.example.com +user = +port = +pass = *** ENCRYPTED *** +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +This remote is called `remote` and can now be used like this + +See all directories in the home directory + + rclone lsd remote: + +Make a new directory + + rclone mkdir remote:path/to/directory + +List the contents of a directory + + rclone ls remote:path/to/directory + +Sync `/home/local/directory` to the remote directory, deleting any +excess files in the directory. + + rclone sync /home/local/directory remote:directory + +### Modified time ### + +FTP does not support modified times. 
Any times you see on the server +will be time of upload. + +### Checksums ### + +FTP does not support any checksums. + +### Limitations ### + +Note that since FTP isn't HTTP based the following flags don't work +with it: `--dump-headers`, `--dump-bodies`, `--dump-auth` + +Note that `--timeout` isn't supported (but `--contimeout` is). + +FTP could support server side move but doesn't yet. + Local Filesystem ------------------------------------------- @@ -5173,6 +5650,18 @@ $ rclone -L ls /tmp/a 6 b/one ``` +#### --no-local-unicode-normalization #### + +By default rclone normalizes (NFC) the unicode representation of filenames and +directories. This flag disables that normalization and uses the same +representation as the local filesystem. + +This can be useful if you need to retain the local unicode representation and +you are using a cloud provider which supports unnormalized names (e.g. S3 or ACD). + +This should also work with any provider if you are using crypt and have file +name encryption (the default) or obfuscation turned on. + #### --one-file-system, -x #### This tells rclone to stay in the filesystem specified by the root and @@ -5601,7 +6090,7 @@ Changelog * Upload releases to github too * Swift * Fix sync for chunked files - * One Drive + * OneDrive * Re-enable server side copy * Don't mask HTTP error codes with JSON decode error * S3 @@ -5617,11 +6106,11 @@ Changelog * Stop SetModTime losing metadata (eg X-Object-Manifest) * This could have caused data loss for files > 5GB in size * Use ContentType from Object to avoid lookups in listings - * One Drive + * OneDrive * disable server side copy as it seems to be broken at Microsoft * v1.24 - 2015-11-07 * New features - * Add support for Microsoft One Drive + * Add support for Microsoft OneDrive * Add `--no-check-certificate` option to disable server certificate verification * Add async readahead buffer for faster transfer of big files * Fixes @@ -5858,11 +6347,11 @@ Sure! 
Rclone stores all of its config in a single file. If you want to find this file, the simplest way is to run `rclone -h` and look at the help for the `--config` flag which will tell you where it is. -See the [remote setup docs](http://rclone.org/remote_setup/) for more info. +See the [remote setup docs](https://rclone.org/remote_setup/) for more info. ### How do I configure rclone on a remote / headless box with no browser? ### -This has now been documented in its own [remote setup page](http://rclone.org/remote_setup/). +This has now been documented in its own [remote setup page](https://rclone.org/remote_setup/). ### Can rclone sync directly from drive to s3 ### @@ -5994,7 +6483,7 @@ This is free software under the terms of the MIT license (check the COPYING file included with the source code). ``` -Copyright (C) 2012 by Nick Craig-Wood http://www.craig-wood.com/nick/ +Copyright (C) 2012 by Nick Craig-Wood https://www.craig-wood.com/nick/ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal @@ -6024,7 +6513,7 @@ Contributors ------------ * Alex Couper - * Leonid Shalupov + * Leonid Shalupov * Shimon Doodkin * Colin Nicholson * Klaus Post @@ -6072,6 +6561,28 @@ Contributors * Jack Schmidt * Dedsec1 * Hisham Zarka + * Jérôme Vizcaino + * Mike Tesch + * Marvin Watson + * Danny Tsai + * Yoni Jah + * Stephen Harris + * Ihor Dvoretskyi + * Jon Craton + * Hraban Luyat + * Michael Ledin + * Martin Kristensen + * Too Much IO + * Anisse Astier + * Zahiar Ahmed + * Igor Kharin + * Bill Zissimopoulos + * Bob Potter + * Steven Lu + * Sjur Fredriksen + * Ruwbin + * Fabian Möller + * Edward Q.
Bridges # Contact the rclone project # diff --git a/MANUAL.txt b/MANUAL.txt index b9edce96d..924738f04 100644 --- a/MANUAL.txt +++ b/MANUAL.txt @@ -1,6 +1,6 @@ rclone(1) User Manual Nick Craig-Wood -Mar 18, 2017 +Jun 15, 2017 @@ -18,11 +18,12 @@ from - Dropbox - Google Cloud Storage - Amazon Drive -- Microsoft One Drive +- Microsoft OneDrive - Hubic - Backblaze B2 - Yandex Disk - SFTP +- FTP - The local filesystem Features @@ -69,7 +70,7 @@ Linux installation from precompiled binary Fetch and unpack - curl -O http://downloads.rclone.org/rclone-current-linux-amd64.zip + curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip unzip rclone-current-linux-amd64.zip cd rclone-*-linux-amd64 @@ -94,7 +95,7 @@ macOS installation from precompiled binary Download the latest version of rclone. - cd && curl -O http://downloads.rclone.org/rclone-current-osx-amd64.zip + cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip Unzip the download and cd to the extracted folder. @@ -115,7 +116,7 @@ Run rclone config to setup. See rclone config docs for more details. Install from source -Make sure you have at least Go 1.5 installed. Make sure your GOPATH is +Make sure you have at least Go 1.6 installed. Make sure your GOPATH is set, then: go get -u -v github.com/ncw/rclone @@ -189,7 +190,7 @@ Install the snap meta layer. openSUSE - sudo zypper addrepo http://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/ snappy + sudo zypper addrepo https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/ snappy sudo zypper install snapd OpenWrt @@ -220,9 +221,10 @@ See the following for detailed instructions for - Amazon Drive - Backblaze B2 - Hubic -- Microsoft One Drive +- Microsoft OneDrive - Yandex Disk - SFTP +- FTP - Crypt - to encrypt other remotes @@ -441,7 +443,7 @@ remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data. 
- rclone check source:path dest:path + rclone check source:path dest:path [flags] Options @@ -627,7 +629,7 @@ Or rclone dedupe rename "drive:Google Photos" - rclone dedupe [mode] remote:path + rclone dedupe [mode] remote:path [flags] Options @@ -671,7 +673,7 @@ the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1. - rclone cat remote:path + rclone cat remote:path [flags] Options @@ -748,6 +750,19 @@ After it has run it will log the status of the encryptedremote:. rclone cryptcheck remote:path cryptedremote:path +rclone dbhashsum + +Produces a Dropbox hash file for all the objects in the path. + +Synopsis + +Produces a Dropbox hash file for all the objects in the path. The hashes +are calculated according to Dropbox content hash rules. The output is in +the same format as md5sum and sha1sum. + + rclone dbhashsum remote:path + + rclone genautocomplete Output bash completion script for rclone. @@ -781,7 +796,11 @@ This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website. - rclone gendocs output_directory + rclone gendocs output_directory [flags] + +Options + + -h, --help help for gendocs rclone listremotes @@ -794,13 +813,48 @@ rclone listremotes lists all the available remotes from the config file. When used with the -l flag it lists the types too. - rclone listremotes + rclone listremotes [flags] Options -l, --long Show the type as well as names. +rclone lsjson + +List directories and objects in the path in JSON format. + +Synopsis + +List directories and objects in the path in JSON format.
+ +The output is an array of Items, where each Item looks like this + +{ "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", +"MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : +"ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, +"IsDir" : false, "ModTime" : "2017-05-31T16:15:57.034468261+01:00", +"Name" : "file.txt", "Path" : "full/path/goes/here/file.txt", "Size" : 6 +} + +If --hash is not specified then the Hashes property won't be emitted. + +If --no-modtime is specified then ModTime will be blank. + +The time is in RFC3339 format with nanosecond precision. + +The whole output can be processed as a JSON blob, or alternatively it +can be processed line by line as each item is written one per line. + + rclone lsjson remote:path [flags] + +Options + + --hash Include hashes in the output (may take longer). + --no-modtime Don't read the modification time (can speed things up). + -R, --recursive Recurse into the listing. + + rclone mount Mount the remote as a mountpoint. EXPERIMENTAL @@ -815,21 +869,20 @@ This is EXPERIMENTAL - use with care. First set up your remote using rclone config. Check it works with rclone ls etc. -Start the mount like this (note the & on the end to put rclone in the -background). +Start the mount like this - rclone mount remote:path/to/files /path/to/local/mount & + rclone mount remote:path/to/files /path/to/local/mount -Stop the mount with +When the program ends, either via Ctrl+C or receiving a SIGINT or +SIGTERM signal, the mount is automatically stopped. +The umount operation can fail, for example when the mountpoint is busy.
+When that happens, it is the user's responsibility to stop the mount +manually with + + # Linux fusermount -u /path/to/local/mount - -Or if that fails try - - fusermount -z -u /path/to/local/mount - -Or with OS X - + # OS X umount /path/to/local/mount Limitations @@ -861,6 +914,20 @@ Filters Note that all the rclone filters can be used to select a subset of the files to be visible in the mount. +Directory Cache + +Using the --dir-cache-time flag, you can set how long a directory should +be considered up to date and not refreshed from the backend. Changes +made locally in the mount may appear immediately or invalidate the +cache. However, changes done on the remote will only be picked up once +the cache expires. + +Alternatively, you can send a SIGHUP signal to rclone for it to flush +all directory caches, regardless of how old they are. Assuming only one +rclone instance is running, you can reset the cache like this: + + kill -SIGHUP $(pidof rclone) + Bugs - All the remotes should work for read, but some may not for write - maybe should pass in size as -1 to mean work it out - Or put in an upload cache to cache the files on disk first -TODO - -- Check hashes on upload/download -- Preserve timestamps -- Move directories - - rclone mount remote:path /path/to/mountpoint + rclone mount remote:path /path/to/mountpoint [flags] Options @@ -886,8 +947,10 @@ Options --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --gid uint32 Override the gid field set by the filesystem. (default 502) --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k) - --no-modtime Don't read the modification time (can speed things up). + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. + --poll-interval duration Time to wait between polling for changes.
Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s) --read-only Mount read-only. --uid uint32 Override the uid field set by the filesystem. (default 502) --umask int Override the permission bits set by the filesystem. (default 2) @@ -931,6 +994,39 @@ flag. rclone moveto source:path dest:path +rclone ncdu + +Explore a remote with a text based user interface. + +Synopsis + +This displays a text based user interface allowing the navigation of a +remote. It is most useful for answering the question - "What is using +all my disk space?". + +To make the user interface it first scans the entire remote given and +builds an in memory representation. rclone ncdu can be used during this +scanning phase and you will see it building up the directory structure +as it goes along. + +Here are the keys - press '?' to toggle the help on and off + + ↑,↓ or k,j to Move + →,l to enter + ←,h to return + c toggle counts + g toggle graph + n,s,C sort by name,size,count + ? to toggle help on and off + q/ESC/c-C to quit + +This is an homage to the ncdu tool but for rclone remotes. It is missing +lots of features at the moment, most importantly deleting files, but is +useful as it stands. + + rclone ncdu remote:path + + rclone obscure Obscure password for use in the rclone.conf @@ -1116,8 +1212,8 @@ In this example, the transfer bandwidth will be set to 512kBytes/sec at 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited. -Bandwidth limits only apply to the data transfer. The don't apply to the -bandwith of the directory listings etc. +Bandwidth limits only apply to the data transfer. They don't apply to +the bandwidth of the directory listings etc. Note that the units are Bytes/s not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8.
For example let's say you @@ -1429,13 +1525,46 @@ Specifying --delete-during will delete files while checking and uploading files. This is the fastest option and uses the least memory. Specifying --delete-after (the default value) will delay deletion of -files until all new/updated files have been successfully transfered. The -files to be deleted are collected in the copy pass then deleted after -the copy pass has completed sucessfully. The files to be deleted are -held in memory so this mode may use more memory. This is the safest mode -as it will only delete files if there have been no errors subsequent to -that. If there have been errors before the deletions start then you will -get the message not deleting files as there were IO errors. +files until all new/updated files have been successfully transferred. +The files to be deleted are collected in the copy pass then deleted +after the copy pass has completed successfully. The files to be deleted +are held in memory so this mode may use more memory. This is the safest +mode as it will only delete files if there have been no errors +subsequent to that. If there have been errors before the deletions start +then you will get the message +not deleting files as there were IO errors. + +--fast-list + +When doing anything which involves a directory listing (eg sync, copy, +ls - in fact nearly every command), rclone normally lists a directory +and processes it before using more directory lists to process any +subdirectories. This can be parallelised and works very quickly using +the least amount of memory. + +However some remotes have a way of listing all files beneath a directory +in one (or a small number) of transactions. These tend to be the bucket +based remotes (eg s3, b2, gcs, swift, hubic). + +If you use the --fast-list flag then rclone will use this method for +listing directories. 
This will have the following consequences for the +listing: + +- It WILL use fewer transactions (important if you pay for them) +- It WILL use more memory. Rclone has to load the whole listing + into memory. +- It _may_ be faster because it uses fewer transactions +- It _may_ be slower because it can't be parallelized + +rclone should always give identical results with and without +--fast-list. + +If you pay for transactions and can fit your entire sync listing into +memory then --fast-list is recommended. If you have a very big sync to +do then don't use --fast-list otherwise you will run out of memory. + +If you use --fast-list on a remote which doesn't support it, then rclone +will just ignore it. --timeout=TIME @@ -1463,7 +1592,7 @@ updated if the sizes are different. On remotes which don't support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these -remoes, rclone will skip any files which exist on the destination and +remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file. @@ -1555,7 +1684,7 @@ this to a file called set-rclone-password: Then source the file when you want to use it. From the shell you would do source set-rclone-password. It will then ask you for the password and -set it in the envonment variable. +set it in the environment variable. If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to @@ -1623,7 +1752,7 @@ If you are only copying a small number of files and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time. 
-However if you are copying a large number of files, escpecially if you +However if you are copying a large number of files, especially if you are doing a copy where lots of the files haven't changed and won't need copying then you shouldn't use --no-traverse. @@ -1698,7 +1827,7 @@ immediately before exiting. When rclone is running it will accumulate errors as it goes along, and only exit with an non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority -log message (visibile with -q) showing the message and which file caused +log message (visible with -q) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority @@ -2274,20 +2403,21 @@ Features Here is an overview of the major features of each cloud storage system. - Name Hash ModTime Case Insensitive Duplicate Files MIME Type - ---------------------- ------ --------- ------------------ ----------------- ----------- - Google Drive MD5 Yes No Yes R/W - Amazon S3 MD5 Yes No No R/W - Openstack Swift MD5 Yes No No R/W - Dropbox - No Yes No R - Google Cloud Storage MD5 Yes No No R/W - Amazon Drive MD5 No Yes No R - Microsoft One Drive SHA1 Yes Yes No R - Hubic MD5 Yes No No R/W - Backblaze B2 SHA1 Yes No No R/W - Yandex Disk MD5 Yes No No R/W - SFTP - Yes Depends No - - The local filesystem All Yes Depends No - + Name Hash ModTime Case Insensitive Duplicate Files MIME Type + ---------------------- ---------- --------- ------------------ ----------------- ----------- + Google Drive MD5 Yes No Yes R/W + Amazon S3 MD5 Yes No No R/W + Openstack Swift MD5 Yes No No R/W + Dropbox DBHASH † Yes Yes No - + Google Cloud Storage MD5 Yes No No R/W + Amazon Drive MD5 No Yes No R + Microsoft OneDrive SHA1 Yes Yes No R + Hubic MD5 Yes No No R/W + Backblaze B2 SHA1 Yes No 
No R/W + Yandex Disk MD5 Yes No No R/W + SFTP - Yes Depends No - + FTP - No Yes No - + The local filesystem All Yes Depends No - Hash @@ -2299,6 +2429,9 @@ command. To use the checksum checks between filesystems they must support a common hash type. +† Note that Dropbox supports its own custom hash. This is an SHA256 sum +of all the 4MB block SHA256s. + ModTime The cloud storage system supports setting modification times on objects. @@ -2362,20 +2495,21 @@ All the remotes support a basic set of features, but there are some optional features supported by some remotes used to make some operations more efficient. - Name Purge Copy Move DirMove CleanUp - ---------------------- ------- ------ ------ --------- --------- - Google Drive Yes Yes Yes Yes No #575 - Amazon S3 No Yes No No No - Openstack Swift Yes † Yes No No No - Dropbox Yes Yes Yes Yes No #575 - Google Cloud Storage Yes Yes No No No - Amazon Drive Yes No Yes Yes No #575 - Microsoft One Drive Yes Yes Yes No #197 No #575 - Hubic Yes † Yes No No No - Backblaze B2 No No No No Yes - Yandex Disk Yes No No No No #575 - SFTP No No Yes Yes No - The local filesystem Yes No Yes Yes No + Name Purge Copy Move DirMove CleanUp ListR + ---------------------- ------- ------ ------ --------- --------- ------- + Google Drive Yes Yes Yes Yes No #575 No + Amazon S3 No Yes No No No Yes + Openstack Swift Yes † Yes No No No Yes + Dropbox Yes Yes Yes Yes No #575 No + Google Cloud Storage Yes Yes No No No Yes + Amazon Drive Yes No Yes Yes No #575 No + Microsoft OneDrive Yes Yes Yes No #197 No #575 No + Hubic Yes † Yes No No No Yes + Backblaze B2 No No No No Yes Yes + Yandex Disk Yes No No No No #575 Yes + SFTP No No Yes Yes No No + FTP No No Yes Yes No No + The local filesystem Yes No Yes Yes No No Purge @@ -2418,6 +2552,12 @@ This is used for emptying the trash for a remote by rclone cleanup. If the server can't do CleanUp then rclone cleanup will return an error. 
+ListR + +The remote supports a recursive list to list all the contents beneath a +directory quickly. This enables the --fast-list flag to work. See the +rclone docs for more details. + Google Drive @@ -2435,10 +2575,13 @@ Here is an example of how to make a remote called remote. First run: This will guide you through an interactive setup process: + No remotes found - make a new one n) New remote - d) Delete remote + r) Rename remote + c) Copy remote + s) Set configuration password q) Quit config - e/n/d/q> n + n/r/c/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value @@ -2452,27 +2595,29 @@ This will guide you through an interactive setup process: \ "dropbox" 5 / Encrypt/Decrypt a remote \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) + 6 / FTP Connection + \ "ftp" + 7 / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" - 7 / Google Drive + 8 / Google Drive \ "drive" - 8 / Hubic + 9 / Hubic \ "hubic" - 9 / Local Disk + 10 / Local Disk \ "local" - 10 / Microsoft OneDrive + 11 / Microsoft OneDrive \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) + 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" - 12 / SSH/SFTP Connection + 13 / SSH/SFTP Connection \ "sftp" - 13 / Yandex Disk + 14 / Yandex Disk \ "yandex" - Storage> 7 + Storage> 8 Google Application Client Id - leave blank normally. - client_id> + client_id> Google Application Client Secret - leave blank normally. - client_secret> + client_secret> Remote config Use auto config? * Say Y if not sure @@ -2484,10 +2629,14 @@ This will guide you through an interactive setup process: Log in and authorize rclone for access Waiting for code... Got code + Configure this as a team drive? 
+ y) Yes + n) No + y/n> n -------------------- [remote] - client_id = - client_secret = + client_id = + client_secret = token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null} -------------------- y) Yes this is OK @@ -2516,6 +2665,42 @@ To copy a local directory to a drive directory called backup rclone copy /home/source remote:backup +Team drives + +If you want to configure the remote to point to a Google Team Drive then +answer y to the question Configure this as a team drive?. + +This will fetch the list of Team Drives from google and allow you to +configure which one you want to use. You can also type in a team drive +ID if you prefer. + +For example: + + Configure this as a team drive? + y) Yes + n) No + y/n> y + Fetching team drive list... + Choose a number from below, or type in your own value + 1 / Rclone Test + \ "xxxxxxxxxxxxxxxxxxxx" + 2 / Rclone Test 2 + \ "yyyyyyyyyyyyyyyyyyyy" + 3 / Rclone Test 3 + \ "zzzzzzzzzzzzzzzzzzzz" + Enter a Team Drive ID> 1 + -------------------- + [remote] + client_id = + client_secret = + token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null} + team_drive = xxxxxxxxxxxxxxxxxxxx + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y + Modified time Google drive stores modification times accurate to 1 ms. @@ -2864,7 +3049,7 @@ This will guide you through an interactive setup process. \ "sa-east-1" location_constraint> 1 Canned ACL used when creating buckets and/or storing objects in S3. 
- For more info visit http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private" @@ -2937,6 +3122,12 @@ files in the bucket. rclone sync /home/local/directory remote:bucket +--fast-list + +This remote supports --fast-list which allows you to use fewer +transactions in exchange for more memory. See the rclone docs for more +details. + Modified time The modified time is stored as metadata on the object as @@ -2973,6 +3164,52 @@ order of precedence: If none of these options actually end up providing rclone with AWS credentials then S3 interaction will be non-authenticated (see below). +S3 Permissions + +When using the sync subcommand of rclone the following minimum +permissions are required to be available on the bucket being written to: + +- ListBucket +- DeleteObject +- GetObject +- PutObject +- PutObjectACL + +Example policy: + + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "arn:aws:iam::USER_SID:user/USER_NAME" + }, + "Action": [ + "s3:ListBucket", + "s3:DeleteObject", + "s3:GetObject", + "s3:PutObject", + "s3:PutObjectAcl" + ], + "Resource": [ + "arn:aws:s3:::BUCKET_NAME/*", + "arn:aws:s3:::BUCKET_NAME" + ] + } + ] + } + +Notes on above: + +1. This is a policy that can be used when creating a bucket. It assumes + that USER_NAME has been created. +2. The Resource entry must include both resource ARNs, as one implies + the bucket and the other implies the bucket's objects. + +For reference, here's an Ansible script that will generate one or more +buckets that will work with rclone sync. + Specific options Here are the command line options specific to this cloud storage system. @@ -3275,6 +3512,12 @@ example above.
Note that you may (or may not) need to set region too - try without first. +--fast-list + +This remote supports --fast-list which allows you to use fewer +transactions in exchange for more memory. See the rclone docs for more +details. + Specific options Here are the command line options specific to this cloud storage system. @@ -3402,20 +3645,17 @@ To copy a local directory to a dropbox directory called backup rclone copy /home/source remote:backup -Modified time and MD5SUMs +Modified time and Hashes -Dropbox doesn't provide the ability to set modification times in the V1 -public API, so rclone can't support modified time with Dropbox. +Dropbox supports modified times, but the only way to set a modification +time is to re-upload the file. -This may change in the future - see these issues for details: +This means that if you uploaded your data with an older version of +rclone which didn't support the v2 API and modified times, rclone will +decide to upload all your old data to fix the modification times. If you +don't want this to happen use --size-only or --checksum flag to stop it. -- Dropbox V2 API -- Allow syncs for remotes that can't set modtime on existing objects - -Dropbox doesn't return any sort of checksum (MD5 or SHA1). - -Together that means that syncs to dropbox will effectively have the ---size-only flag set. +Dropbox supports its own hash type which is checked for all transfers. Specific options @@ -3440,7 +3680,7 @@ those file names, but the sync won't fail. If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As -a work-around do an rclone delete dropbix:dir followed by an +a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir. 
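The Dropbox hash type checked on transfers can be sketched in a few lines of Python, following the scheme described earlier in this manual (a SHA256 of the per-4MB-block SHA256s). The function name and sample data here are illustrative, not rclone's own code:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # Dropbox hashes fixed 4 MB blocks


def dropbox_hash(data: bytes) -> str:
    """SHA256 of the concatenated SHA256 digests of each 4 MB block."""
    digests = b"".join(
        hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
        for i in range(0, len(data), BLOCK_SIZE)
    )
    return hashlib.sha256(digests).hexdigest()


# For data smaller than one block this reduces to a double SHA256.
print(dropbox_hash(b"hello world"))
```

For real files you would read and hash 4 MB at a time rather than holding the whole file in memory.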
@@ -3600,6 +3840,12 @@ To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow. +--fast-list + +This remote supports --fast-list which allows you to use fewer +transactions in exchange for more memory. See the rclone docs for more +details. + Modified time Google Cloud Storage stores md5sums natively and rclone stores @@ -3617,16 +3863,35 @@ The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config walks you through it. +The configuration process for Amazon Drive may involve using an oauth +proxy. This is used to keep the Amazon credentials out of the source +code. The proxy runs in Google's very secure App Engine environment and +doesn't store any credentials which pass through it. + +NB rclone does not currently have its own Amazon Drive credentials +(see the forum for why) so you will either need to have your own +client_id and client_secret with Amazon Drive, or use a third party +oauth proxy in which case you will need to enter client_id, +client_secret, auth_url and token_url. + +Note also if you are not using Amazon's auth_url and token_url (ie you +filled in something for those), then if setting up on a remote machine +you can only use the copying the config method of configuration - +rclone authorize will not work. + Here is an example of how to make a remote called remote. First run: rclone config This will guide you through an interactive setup process: + No remotes found - make a new one n) New remote - d) Delete remote + r) Rename remote + c) Copy remote + s) Set configuration password q) Quit config - e/n/d/q> n + n/r/c/s/q> n name> remote Type of storage to configure.
Choose a number from below, or type in your own value @@ -3640,28 +3905,35 @@ This will guide you through an interactive setup process: \ "dropbox" 5 / Encrypt/Decrypt a remote \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) + 6 / FTP Connection + \ "ftp" + 7 / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" - 7 / Google Drive + 8 / Google Drive \ "drive" - 8 / Hubic + 9 / Hubic \ "hubic" - 9 / Local Disk + 10 / Local Disk \ "local" - 10 / Microsoft OneDrive + 11 / Microsoft OneDrive \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) + 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" - 12 / SSH/SFTP Connection + 13 / SSH/SFTP Connection \ "sftp" - 13 / Yandex Disk + 14 / Yandex Disk \ "yandex" Storage> 1 - Amazon Application Client Id - leave blank normally. - client_id> - Amazon Application Client Secret - leave blank normally. - client_secret> + Amazon Application Client Id - required. + client_id> your client ID goes here + Amazon Application Client Secret - required. + client_secret> your client secret goes here + Auth server URL - leave blank to use Amazon's. + auth_url> Optional auth URL + Token server url - leave blank to use Amazon's. + token_url> Optional token URL Remote config + Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config. Use auto config? 
* Say Y if not sure * Say N if you are working on a remote or headless machine @@ -3674,8 +3946,10 @@ This will guide you through an interactive setup process: Got code -------------------- [remote] - client_id = - client_secret = + client_id = your client ID goes here + client_secret = your client secret goes here + auth_url = Optional auth URL + token_url = Optional token URL token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"} -------------------- y) Yes this is OK @@ -3781,7 +4055,7 @@ larger than this will fail. At the time of writing (Jan 2016) this is in the area of 50GB per file. This means that larger files are likely to fail. -Unfortunatly there is no way for rclone to see that this failure is +Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use the --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split @@ -4038,6 +4312,12 @@ you need to copy your files to the default directory rclone copy /home/source remote:default/backup +--fast-list + +This remote supports --fast-list which allows you to use fewer +transactions in exchange for more memory. See the rclone docs for more +details. + Modified time The modified time is stored as metadata on the object as @@ -4145,6 +4425,12 @@ files in the bucket. rclone sync /home/local/directory remote:bucket +--fast-list + +This remote supports --fast-list which allows you to use fewer +transactions in exchange for more memory. See the rclone docs for more +details. + Modified time The modified time is stored as metadata on the object as @@ -4431,6 +4717,12 @@ in the path. rclone sync /home/local/directory remote:directory +--fast-list + +This remote supports --fast-list which allows you to use fewer +transactions in exchange for more memory.
See the rclone docs for more +details. + Modified time Modified times are supported and are stored accurate to 1 ns in custom @@ -4479,23 +4771,25 @@ which you can get from the SFTP control panel. \ "dropbox" 5 / Encrypt/Decrypt a remote \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) + 6 / FTP Connection + \ "ftp" + 7 / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" - 7 / Google Drive + 8 / Google Drive \ "drive" - 8 / Hubic + 9 / Hubic \ "hubic" - 9 / Local Disk + 10 / Local Disk \ "local" - 10 / Microsoft OneDrive + 11 / Microsoft OneDrive \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) + 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" - 12 / SSH/SFTP Connection + 13 / SSH/SFTP Connection \ "sftp" - 13 / Yandex Disk + 14 / Yandex Disk \ "yandex" - Storage> 12 + Storage> sftp SSH host to connect to Choose a number from below, or type in your own value 1 / Connect to example.com @@ -4503,7 +4797,7 @@ which you can get from the SFTP control panel. host> example.com SSH username, leave blank for current username, ncw user> - SSH port + SSH port, leave blank to use default (22) port> SSH password, leave blank to use ssh-agent y) Yes type in my own password @@ -4552,6 +4846,8 @@ Limitations SFTP does not support any checksums. +The only ssh agent supported under Windows is Putty's pagent. + SFTP isn't supported under plan9 until this issue is fixed. Note that since SFTP isn't HTTP based the following flags don't work @@ -4625,6 +4921,8 @@ differentiate it from the remote. \ "off" 2 / Encrypt the filenames see the docs for the details. \ "standard" + 3 / Very simple filename obfuscation. + \ "obfuscate" filename_encryption> 2 Password or pass phrase for encryption. 
y) Yes type in my own password @@ -4770,6 +5068,27 @@ Standard - identical file names will have identical uploaded names - can use shortcuts to shorten the directory recursion +Obfuscation + +This is a simple "rotate" of the filename, with each file having a rot +distance based on the filename. We store the distance at the beginning +of the filename. So a file called "hello" may become "53.jgnnq". + +This is not a strong encryption of filenames, but it may stop automated +scanning tools from picking up on filename patterns. As such it's an +intermediate between "off" and "standard". The advantage is that it +allows for longer path segment names. + +There is a possibility with some unicode based filenames that the +obfuscation is weak and may map lower case characters to upper case +equivalents. You cannot rely on this for strong protection. + +- file names very lightly obfuscated +- file names can be longer than standard encryption +- can use sub paths and copy single files +- directory structure visible +- identical file names will have identical uploaded names + Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in @@ -4932,6 +5251,127 @@ encrypted data. For full protection against this you should always use a salt. +FTP + +FTP is the File Transfer Protocol. FTP support is provided using the +github.com/jlaffaye/ftp package. + +Here is an example of making an FTP configuration. First run + + rclone config + +This will guide you through an interactive setup process. An FTP remote +only needs a host together with a username and a password. With an +anonymous FTP server, you will need to use anonymous as username and +your email address as the password.
+ + No remotes found - make a new one + n) New remote + r) Rename remote + c) Copy remote + s) Set configuration password + q) Quit config + n/r/c/s/q> n + name> remote + Type of storage to configure. + Choose a number from below, or type in your own value + 1 / Amazon Drive + \ "amazon cloud drive" + 2 / Amazon S3 (also Dreamhost, Ceph, Minio) + \ "s3" + 3 / Backblaze B2 + \ "b2" + 4 / Dropbox + \ "dropbox" + 5 / Encrypt/Decrypt a remote + \ "crypt" + 6 / FTP Connection + \ "ftp" + 7 / Google Cloud Storage (this is not Google Drive) + \ "google cloud storage" + 8 / Google Drive + \ "drive" + 9 / Hubic + \ "hubic" + 10 / Local Disk + \ "local" + 11 / Microsoft OneDrive + \ "onedrive" + 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) + \ "swift" + 13 / SSH/SFTP Connection + \ "sftp" + 14 / Yandex Disk + \ "yandex" + Storage> ftp + FTP host to connect to + Choose a number from below, or type in your own value + 1 / Connect to ftp.example.com + \ "ftp.example.com" + host> ftp.example.com + FTP username, leave blank for current username, ncw + user> + FTP port, leave blank to use default (21) + port> + FTP password + y) Yes type in my own password + g) Generate random password + y/g> y + Enter the password: + password: + Confirm the password: + password: + Remote config + -------------------- + [remote] + host = ftp.example.com + user = + port = + pass = *** ENCRYPTED *** + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y + +This remote is called remote and can now be used like this + +See all directories in the home directory + + rclone lsd remote: + +Make a new directory + + rclone mkdir remote:path/to/directory + +List the contents of a directory + + rclone ls remote:path/to/directory + +Sync /home/local/directory to the remote directory, deleting any excess +files in the directory. + + rclone sync /home/local/directory remote:directory + +Modified time + +FTP does not support modified times. 
Any times you see on the server +will be time of upload. + +Checksums + +FTP does not support any checksums. + +Limitations + +Note that since FTP isn't HTTP based the following flags don't work with +it: --dump-headers, --dump-bodies, --dump-auth + +Note that --timeout isn't supported (but --contimeout is). + +FTP could support server side move but doesn't yet. + + Local Filesystem Local paths are specified as normal filesystem paths, eg @@ -5037,6 +5477,19 @@ and 6 b/two 6 b/one +--no-local-unicode-normalization + +By default rclone normalizes (NFC) the unicode representation of +filenames and directories. This flag disables that normalization and +uses the same representation as the local filesystem. + +This can be useful if you need to retain the local unicode +representation and you are using a cloud provider which supports +unnormalized names (e.g. S3 or ACD). + +This should also work with any provider if you are using crypt and have +file name encryption (the default) or obfuscation turned on. 
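The normalization that --no-local-unicode-normalization disables is standard Unicode NFC, which matters because the same visible name can be encoded in more than one byte sequence. A quick sketch using Python's unicodedata module (the file name here is invented):

```python
import unicodedata

# "é" can arrive as one precomposed code point (NFC) or as "e" plus a
# combining acute accent (NFD) - they render identically but compare unequal.
nfd_name = "re\u0301sume\u0301.txt"                # decomposed form
nfc_name = unicodedata.normalize("NFC", nfd_name)  # normalized form

print(nfd_name == nfc_name)          # False
print(len(nfd_name), len(nfc_name))  # 12 10
```

Without a consistent form, a file can appear "missing" on the destination simply because the two sides encoded its name differently.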
+ --one-file-system, -x This tells rclone to stay in the filesystem specified by the root and @@ -5531,7 +5984,7 @@ Changelog - Upload releases to github too - Swift - Fix sync for chunked files - - One Drive + - OneDrive - Re-enable server side copy - Don't mask HTTP error codes with JSON decode error - S3 @@ -5548,11 +6001,11 @@ Changelog - Stop SetModTime losing metadata (eg X-Object-Manifest) - This could have caused data loss for files > 5GB in size - Use ContentType from Object to avoid lookups in listings - - One Drive + - OneDrive - disable server side copy as it seems to be broken at Microsoft - v1.24 - 2015-11-07 - New features - - Add support for Microsoft One Drive + - Add support for Microsoft OneDrive - Add --no-check-certificate option to disable server certificate verification - Add async readahead buffer for faster transfer of big files @@ -5932,7 +6385,7 @@ License This is free software under the terms of MIT the license (check the COPYING file included with the source code). 
- Copyright (C) 2012 by Nick Craig-Wood http://www.craig-wood.com/nick/ + Copyright (C) 2012 by Nick Craig-Wood https://www.craig-wood.com/nick/ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal @@ -5961,7 +6414,7 @@ Authors Contributors - Alex Couper amcouper@gmail.com -- Leonid Shalupov leonid@shalupov.com +- Leonid Shalupov leonid@shalupov.com shalupov@diverse.org.ru - Shimon Doodkin helpmepro1@gmail.com - Colin Nicholson colin@colinn.com - Klaus Post klauspost@gmail.com @@ -6009,6 +6462,28 @@ Contributors - Jack Schmidt github@mowsey.org - Dedsec1 Dedsec1@users.noreply.github.com - Hisham Zarka hzarka@gmail.com +- Jérôme Vizcaino jerome.vizcaino@gmail.com +- Mike Tesch mjt6129@rit.edu +- Marvin Watson marvwatson@users.noreply.github.com +- Danny Tsai danny8376@gmail.com +- Yoni Jah yonjah+git@gmail.com yonjah+github@gmail.com +- Stephen Harris github@spuddy.org +- Ihor Dvoretskyi ihor.dvoretskyi@gmail.com +- Jon Craton jncraton@gmail.com +- Hraban Luyat hraban@0brg.net +- Michael Ledin mledin89@gmail.com +- Martin Kristensen me@azgul.com +- Too Much IO toomuchio@users.noreply.github.com +- Anisse Astier anisse@astier.eu +- Zahiar Ahmed zahiar@live.com +- Igor Kharin igorkharin@gmail.com +- Bill Zissimopoulos billziss@navimatics.com +- Bob Potter bobby.potter@gmail.com +- Steven Lu tacticalazn@gmail.com +- Sjur Fredriksen sjurtf@ifi.uio.no +- Ruwbin hubus12345@gmail.com +- Fabian Möller fabianm88@gmail.com +- Edward Q. 
Bridges github@eqbridges.com diff --git a/bin/make_manual.py b/bin/make_manual.py index 5478a8e84..8c38ba8b2 100755 --- a/bin/make_manual.py +++ b/bin/make_manual.py @@ -31,6 +31,7 @@ docs = [ "yandex.md", "sftp.md", "crypt.md", + "ftp.md", "local.md", "changelog.md", "bugs.md", diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md index 58fc570ef..9716cdd4b 100644 --- a/docs/content/commands/rclone.md +++ b/docs/content/commands/rclone.md @@ -1,19 +1,19 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone" slug: rclone url: /commands/rclone/ --- ## rclone -Sync files and directories to and from local and remote object stores - v1.36 +Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 ### Synopsis Rclone is a command line program to sync files and directories to and -from various cloud storage systems, such as: +from various cloud storage systems and using file transfer services, such as: * Google Drive * Amazon S3 @@ -25,6 +25,8 @@ from various cloud storage systems, such as: * Hubic * Backblaze B2 * Yandex Disk + * SFTP + * FTP * The local filesystem Features @@ -44,7 +46,7 @@ and configuration walkthroughs. ``` -rclone +rclone [flags] ``` ### Options @@ -76,6 +78,7 @@ rclone --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. 
@@ -87,6 +90,7 @@ rclone --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. --files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -96,6 +100,7 @@ rclone -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -110,7 +115,7 @@ rclone --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -142,6 +147,7 @@ rclone * [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied * [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied * [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integritity of a crypted remote. 
+* [rclone dbhashsum](/commands/rclone_dbhashsum/) - Produces a Dropbox hash file for all the objects in the path. * [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files delete/rename them. * [rclone delete](/commands/rclone_delete/) - Remove the contents of path. * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output bash completion script for rclone. @@ -149,12 +155,14 @@ rclone * [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file. * [rclone ls](/commands/rclone_ls/) - List all the objects in the path with size and path. * [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path. +* [rclone lsjson](/commands/rclone_lsjson/) - List directories and objects in the path in JSON format. * [rclone lsl](/commands/rclone_lsl/) - List all the objects path with modification time, size and path. * [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path. * [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist. * [rclone mount](/commands/rclone_mount/) - Mount the remote as a mountpoint. **EXPERIMENTAL** * [rclone move](/commands/rclone_move/) - Move files from source to dest. * [rclone moveto](/commands/rclone_moveto/) - Move file or directory from source to dest. +* [rclone ncdu](/commands/rclone_ncdu/) - Explore a remote with a text based user interface. * [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone.conf * [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents. * [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty. @@ -164,4 +172,4 @@ rclone * [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only. * [rclone version](/commands/rclone_version/) - Show the version number.
-###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md index 87637fb2d..58bc15dc9 100644 --- a/docs/content/commands/rclone_authorize.md +++ b/docs/content/commands/rclone_authorize.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone authorize" slug: rclone_authorize url: /commands/rclone_authorize/ @@ -49,6 +49,7 @@ rclone authorize --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -60,6 +61,7 @@ rclone authorize --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -69,6 +71,7 @@ rclone authorize -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -83,7 +86,7 @@ rclone authorize --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -106,6 +109,6 @@ rclone authorize ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md index 1aab3a12b..0c60667a4 100644 --- a/docs/content/commands/rclone_cat.md +++ b/docs/content/commands/rclone_cat.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone cat" slug: rclone_cat url: /commands/rclone_cat/ @@ -33,7 +33,7 @@ Note that if offset is negative it will count from the end, so ``` -rclone cat remote:path +rclone cat remote:path [flags] ``` ### Options @@ -75,6 +75,7 @@ rclone cat remote:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -86,6 +87,7 @@ rclone cat remote:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -95,6 +97,7 @@ rclone cat remote:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -109,7 +112,7 @@ rclone cat remote:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -132,6 +135,6 @@ rclone cat remote:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md index 09bdc0705..8e637eb58 100644 --- a/docs/content/commands/rclone_check.md +++ b/docs/content/commands/rclone_check.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone check" slug: rclone_check url: /commands/rclone_check/ @@ -26,7 +26,7 @@ to check all the data. ``` -rclone check source:path dest:path +rclone check source:path dest:path [flags] ``` ### Options @@ -64,6 +64,7 @@ rclone check source:path dest:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -75,6 +76,7 @@ rclone check source:path dest:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -84,6 +86,7 @@ rclone check source:path dest:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -98,7 +101,7 @@ rclone check source:path dest:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -121,6 +124,6 @@ rclone check source:path dest:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md index 332bd6c9e..5bac515bf 100644 --- a/docs/content/commands/rclone_cleanup.md +++ b/docs/content/commands/rclone_cleanup.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone cleanup" slug: rclone_cleanup url: /commands/rclone_cleanup/ @@ -49,6 +49,7 @@ rclone cleanup remote:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -60,6 +61,7 @@ rclone cleanup remote:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -69,6 +71,7 @@ rclone cleanup remote:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -83,7 +86,7 @@ rclone cleanup remote:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -106,6 +109,6 @@ rclone cleanup remote:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md index d0a8e50cc..3ca3ac946 100644 --- a/docs/content/commands/rclone_config.md +++ b/docs/content/commands/rclone_config.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone config" slug: rclone_config url: /commands/rclone_config/ @@ -46,6 +46,7 @@ rclone config --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -57,6 +58,7 @@ rclone config --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -66,6 +68,7 @@ rclone config -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -80,7 +83,7 @@ rclone config --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -103,6 +106,6 @@ rclone config ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md index ad1bf7ffa..d4df4d441 100644 --- a/docs/content/commands/rclone_copy.md +++ b/docs/content/commands/rclone_copy.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone copy" slug: rclone_copy url: /commands/rclone_copy/ @@ -85,6 +85,7 @@ rclone copy source:path dest:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -96,6 +97,7 @@ rclone copy source:path dest:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -105,6 +107,7 @@ rclone copy source:path dest:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -119,7 +122,7 @@ rclone copy source:path dest:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -142,6 +145,6 @@ rclone copy source:path dest:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md index cd1ca8cfc..23ee1be35 100644 --- a/docs/content/commands/rclone_copyto.md +++ b/docs/content/commands/rclone_copyto.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone copyto" slug: rclone_copyto url: /commands/rclone_copyto/ @@ -72,6 +72,7 @@ rclone copyto source:path dest:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -83,6 +84,7 @@ rclone copyto source:path dest:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -92,6 +94,7 @@ rclone copyto source:path dest:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -106,7 +109,7 @@ rclone copyto source:path dest:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -129,6 +132,6 @@ rclone copyto source:path dest:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md index b1923a12b..d94f9eb08 100644 --- a/docs/content/commands/rclone_cryptcheck.md +++ b/docs/content/commands/rclone_cryptcheck.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone cryptcheck" slug: rclone_cryptcheck url: /commands/rclone_cryptcheck/ @@ -69,6 +69,7 @@ rclone cryptcheck remote:path cryptedremote:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -80,6 +81,7 @@ rclone cryptcheck remote:path cryptedremote:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -89,6 +91,7 @@ rclone cryptcheck remote:path cryptedremote:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -103,7 +106,7 @@ rclone cryptcheck remote:path cryptedremote:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -126,6 +129,6 @@ rclone cryptcheck remote:path cryptedremote:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_dbhashsum.md b/docs/content/commands/rclone_dbhashsum.md new file mode 100644 index 000000000..70d97ba5c --- /dev/null +++ b/docs/content/commands/rclone_dbhashsum.md @@ -0,0 +1,116 @@ +--- +date: 2017-06-15T20:06:09+01:00 +title: "rclone dbhashsum" +slug: rclone_dbhashsum +url: /commands/rclone_dbhashsum/ +--- +## rclone dbhashsum + +Produces a Dropbox hash file for all the objects in the path. + +### Synopsis + + + +Produces a Dropbox hash file for all the objects in the path. The +hashes are calculated according to [Dropbox content hash +rules](https://www.dropbox.com/developers/reference/content-hash). +The output is in the same format as md5sum and sha1sum. + + +``` +rclone dbhashsum remote:path +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR. + --buffer-size int Buffer size when copying files. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transferring + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. + --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me + --drive-skip-gdocs Skip google documents in all listings. + --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. + --dropbox-chunk-size int Upload chunk size. Max 150M.
(default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-auth Dump HTTP headers with auth info + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. + --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ignore-checksum Skip post copy check of checksums. + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames + --log-file string Log everything to this file + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. 
(default -1) + --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --old-sync-method Deprecated - use --fast-list instead + -x, --one-file-system Don't cross filesystem boundaries. + --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) + --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --suffix string Suffix for use with --backup-dir. + --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... 
(default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. + -v, --verbose count[=-1] Print lots more stuff (repeat for more) +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 + +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md index 95bda7a45..8392e3094 100644 --- a/docs/content/commands/rclone_dedupe.md +++ b/docs/content/commands/rclone_dedupe.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone dedupe" slug: rclone_dedupe url: /commands/rclone_dedupe/ @@ -89,7 +89,7 @@ Or ``` -rclone dedupe [mode] remote:path +rclone dedupe [mode] remote:path [flags] ``` ### Options @@ -127,6 +127,7 @@ rclone dedupe [mode] remote:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -138,6 +139,7 @@ rclone dedupe [mode] remote:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. 
Uses more memory but fewer transactions. --files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -147,6 +149,7 @@ rclone dedupe [mode] remote:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -161,7 +164,7 @@ rclone dedupe [mode] remote:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -184,6 +187,6 @@ rclone dedupe [mode] remote:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md index 77b278567..5c1c32dab 100644 --- a/docs/content/commands/rclone_delete.md +++ b/docs/content/commands/rclone_delete.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone delete" slug: rclone_delete url: /commands/rclone_delete/ @@ -63,6 +63,7 @@ rclone delete remote:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -74,6 +75,7 @@ rclone delete remote:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -83,6 +85,7 @@ rclone delete remote:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -97,7 +100,7 @@ rclone delete remote:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -120,6 +123,6 @@ rclone delete remote:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md index a69c50b14..77f08b228 100644 --- a/docs/content/commands/rclone_genautocomplete.md +++ b/docs/content/commands/rclone_genautocomplete.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone genautocomplete" slug: rclone_genautocomplete url: /commands/rclone_genautocomplete/ @@ -61,6 +61,7 @@ rclone genautocomplete [output_file] --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -72,6 +73,7 @@ rclone genautocomplete [output_file] --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -81,6 +83,7 @@ rclone genautocomplete [output_file] -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -95,7 +98,7 @@ rclone genautocomplete [output_file] --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -118,6 +121,6 @@ rclone genautocomplete [output_file] ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md index afba37395..0059dbe71 100644 --- a/docs/content/commands/rclone_gendocs.md +++ b/docs/content/commands/rclone_gendocs.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone gendocs" slug: rclone_gendocs url: /commands/rclone_gendocs/ @@ -17,7 +17,13 @@ supplied. These are in a format suitable for hugo to render into the rclone.org website. ``` -rclone gendocs output_directory +rclone gendocs output_directory [flags] +``` + +### Options + +``` + -h, --help help for gendocs ``` ### Options inherited from parent commands @@ -49,6 +55,7 @@ rclone gendocs output_directory --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. 
@@ -60,6 +67,7 @@ rclone gendocs output_directory --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. --files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -69,6 +77,7 @@ rclone gendocs output_directory -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -83,7 +92,7 @@ rclone gendocs output_directory --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -106,6 +115,6 @@ rclone gendocs output_directory ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md index ff7986512..11b2e1fd9 100644 --- a/docs/content/commands/rclone_listremotes.md +++ b/docs/content/commands/rclone_listremotes.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone listremotes" slug: rclone_listremotes url: /commands/rclone_listremotes/ @@ -18,7 +18,7 @@ When uses with the -l flag it lists the types too. ``` -rclone listremotes +rclone listremotes [flags] ``` ### Options @@ -56,6 +56,7 @@ rclone listremotes --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -67,6 +68,7 @@ rclone listremotes --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -76,6 +78,7 @@ rclone listremotes -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -90,7 +93,7 @@ rclone listremotes --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -113,6 +116,6 @@ rclone listremotes ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md index ec3ece315..5ed286409 100644 --- a/docs/content/commands/rclone_ls.md +++ b/docs/content/commands/rclone_ls.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone ls" slug: rclone_ls url: /commands/rclone_ls/ @@ -46,6 +46,7 @@ rclone ls remote:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -57,6 +58,7 @@ rclone ls remote:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -66,6 +68,7 @@ rclone ls remote:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -80,7 +83,7 @@ rclone ls remote:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -103,6 +106,6 @@ rclone ls remote:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md index 4c339979f..75c4a3752 100644 --- a/docs/content/commands/rclone_lsd.md +++ b/docs/content/commands/rclone_lsd.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone lsd" slug: rclone_lsd url: /commands/rclone_lsd/ @@ -46,6 +46,7 @@ rclone lsd remote:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -57,6 +58,7 @@ rclone lsd remote:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -66,6 +68,7 @@ rclone lsd remote:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -80,7 +83,7 @@ rclone lsd remote:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
                                            (default 10M)
      --onedrive-upload-cutoff int          Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -103,6 +106,6 @@ rclone lsd remote:path
 ```
 
 ### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
 
-###### Auto generated by spf13/cobra on 18-Mar-2017
+###### Auto generated by spf13/cobra on 15-Jun-2017
diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md
new file mode 100644
index 000000000..17440f53e
--- /dev/null
+++ b/docs/content/commands/rclone_lsjson.md
@@ -0,0 +1,144 @@
+---
+date: 2017-06-15T20:06:09+01:00
+title: "rclone lsjson"
+slug: rclone_lsjson
+url: /commands/rclone_lsjson/
+---
+## rclone lsjson
+
+List directories and objects in the path in JSON format.
+
+### Synopsis
+
+
+List directories and objects in the path in JSON format.
+
+The output is an array of Items, where each Item looks like this
+
+    {
+      "Hashes" : {
+         "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
+         "MD5" : "b1946ac92492d2347c6235b4d2611184",
+         "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
+      },
+      "IsDir" : false,
+      "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
+      "Name" : "file.txt",
+      "Path" : "full/path/goes/here/file.txt",
+      "Size" : 6
+    }
+
+If --hash is not specified then the Hashes property won't be emitted.
+
+If --no-modtime is specified then ModTime will be blank.
+
+The time is in RFC3339 format with nanosecond precision.
+
+The whole output can be processed as a JSON blob, or alternatively it
+can be processed line by line as each item is written one per line.
+
+
+```
+rclone lsjson remote:path [flags]
+```
+
+### Options
+
+```
+      --hash         Include hashes in the output (may take longer).
+      --no-modtime   Don't read the modification time (can speed things up).
+  -R, --recursive    Recurse into the listing.
+```
+
+### Options inherited from parent commands
+
+```
+      --acd-templink-threshold int          Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-upload-wait-per-gb duration     Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --ask-password                        Allow prompt for password for encrypted configuration. (default true)
+      --b2-chunk-size int                   Upload chunk size. Must fit in memory. (default 96M)
+      --b2-test-mode string                 A flag string for X-Bz-Test-Mode header.
+      --b2-upload-cutoff int                Cutoff for switching to chunked upload (default 190.735M)
+      --b2-versions                         Include old versions in directory listings.
+      --backup-dir string                   Make backups into hierarchy based in DIR.
+      --buffer-size int                     Buffer size when copying files. (default 16M)
+      --bwlimit BwTimetable                 Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --checkers int                        Number of checkers to run in parallel. (default 8)
+  -c, --checksum                            Skip based on checksum & size, not mod-time & size
+      --config string                       Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration                 Connect timeout (default 1m0s)
+  -L, --copy-links                          Follow symlinks and copy the pointed to item.
+      --cpuprofile string                   Write cpu profile to file
+      --crypt-show-mapping                  For all files listed show how the names encrypt.
+      --delete-after                        When synchronizing, delete files on destination after transferring
+      --delete-before                       When synchronizing, delete files on destination before transferring
+      --delete-during                       When synchronizing, delete files during transfer (default)
+      --delete-excluded                     Delete files on dest excluded from sync
+      --drive-auth-owner-only               Only consider files owned by the authenticated user. Requires drive-full-list.
+      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+      --drive-formats string                Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-full-list                     Use a full listing for directory list. More data but usually quicker. (obsolete)
+      --drive-list-chunk int                Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-shared-with-me                Only show files that are shared with me
+      --drive-skip-gdocs                    Skip google documents in all listings.
+      --drive-upload-cutoff int             Cutoff for switching to chunked upload (default 8M)
+      --drive-use-trash                     Send files to the trash instead of deleting permanently.
+      --dropbox-chunk-size int              Upload chunk size. Max 150M. (default 128M)
+  -n, --dry-run                             Do a trial run with no permanent changes
+      --dump-auth                           Dump HTTP headers with auth info
+      --dump-bodies                         Dump HTTP headers and bodies - may contain sensitive info
+      --dump-filters                        Dump the filters to the output
+      --dump-headers                        Dump HTTP headers - may contain sensitive info
+      --exclude stringArray                 Exclude files matching pattern
+      --exclude-from stringArray            Read exclude patterns from file
+      --fast-list                           Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray              Read list of source-file names from file
+  -f, --filter stringArray                  Add a file-filtering rule
+      --filter-from stringArray             Read filtering patterns from a file
+      --ignore-checksum                     Skip post copy check of checksums.
+      --ignore-existing                     Skip all files that exist on destination
+      --ignore-size                         Ignore size when skipping use mod-time or checksum.
+  -I, --ignore-times                        Don't skip files that match size and time - transfer all files
+      --include stringArray                 Include files matching pattern
+      --include-from stringArray            Read include patterns from file
+      --local-no-unicode-normalization      Don't apply unicode normalization to paths and filenames
+      --log-file string                     Log everything to this file
+      --log-level string                    Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --low-level-retries int               Number of low level retries to do. (default 10)
+      --max-age string                      Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+      --max-depth int                       If set limits the recursion depth to this. (default -1)
+      --max-size int                        Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+      --memprofile string                   Write memory profile to file
+      --min-age string                      Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+      --min-size int                        Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+      --modify-window duration              Max time diff to be considered the same (default 1ns)
+      --no-check-certificate                Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding                    Don't set Accept-Encoding: gzip.
+      --no-traverse                         Don't traverse destination file system on copy.
+      --no-update-modtime                   Don't update destination mod-time if files identical.
+      --old-sync-method                     Deprecated - use --fast-list instead
+  -x, --one-file-system                     Don't cross filesystem boundaries.
+      --onedrive-chunk-size int             Above this size files will be chunked - must be multiple of 320k. (default 10M)
+      --onedrive-upload-cutoff int          Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+  -q, --quiet                               Print as little stuff as possible
+      --retries int                         Retry operations this many times if they fail (default 3)
+      --s3-acl string                       Canned ACL used when creating buckets and/or storing objects in S3
+      --s3-storage-class string             Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+      --size-only                           Skip based on size only, not mod-time or checksum
+      --stats duration                      Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-unit string                   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --suffix string                       Suffix for use with --backup-dir.
+      --swift-chunk-size int                Above this size files will be chunked into a _segments container. (default 5G)
+      --syslog                              Use Syslog for logging
+      --syslog-facility string              Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration                    IO idle timeout (default 5m0s)
+      --track-renames                       When synchronizing, track file renames and do a server side move if possible
+      --transfers int                       Number of file transfers to run in parallel. (default 4)
+  -u, --update                              Skip files that are newer on the destination.
+  -v, --verbose count[=-1]                  Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+
+###### Auto generated by spf13/cobra on 15-Jun-2017
diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md
index cde9e050f..80dbcf793 100644
--- a/docs/content/commands/rclone_lsl.md
+++ b/docs/content/commands/rclone_lsl.md
@@ -1,5 +1,5 @@
 ---
-date: 2017-03-18T11:19:45Z
+date: 2017-06-15T20:06:09+01:00
 title: "rclone lsl"
 slug: rclone_lsl
 url: /commands/rclone_lsl/
@@ -46,6 +46,7 @@ rclone lsl remote:path
       --drive-formats string                Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
       --drive-full-list                     Use a full listing for directory list. More data but usually quicker. (obsolete)
       --drive-list-chunk int                Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-shared-with-me                Only show files that are shared with me
       --drive-skip-gdocs                    Skip google documents in all listings.
       --drive-upload-cutoff int             Cutoff for switching to chunked upload (default 8M)
       --drive-use-trash                     Send files to the trash instead of deleting permanently.
@@ -57,6 +58,7 @@ rclone lsl remote:path
       --dump-headers                        Dump HTTP headers - may contain sensitive info
       --exclude stringArray                 Exclude files matching pattern
       --exclude-from stringArray            Read exclude patterns from file
+      --fast-list                           Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -66,6 +68,7 @@ rclone lsl remote:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -80,7 +83,7 @@ rclone lsl remote:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -103,6 +106,6 @@ rclone lsl remote:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md index b7cedb16c..6425609e7 100644 --- a/docs/content/commands/rclone_md5sum.md +++ b/docs/content/commands/rclone_md5sum.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone md5sum" slug: rclone_md5sum url: /commands/rclone_md5sum/ @@ -49,6 +49,7 @@ rclone md5sum remote:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -60,6 +61,7 @@ rclone md5sum remote:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
       --files-from stringArray                  Read list of source-file names from file
   -f, --filter stringArray                      Add a file-filtering rule
       --filter-from stringArray                 Read filtering patterns from a file
@@ -69,6 +71,7 @@ rclone md5sum remote:path
   -I, --ignore-times                            Don't skip files that match size and time - transfer all files
       --include stringArray                     Include files matching pattern
       --include-from stringArray                Read include patterns from file
+      --local-no-unicode-normalization          Don't apply unicode normalization to paths and filenames
       --log-file string                         Log everything to this file
       --log-level string                        Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
       --low-level-retries int                   Number of low level retries to do. (default 10)
@@ -83,7 +86,7 @@ rclone md5sum remote:path
       --no-gzip-encoding                        Don't set Accept-Encoding: gzip.
       --no-traverse                             Don't traverse destination file system on copy.
       --no-update-modtime                       Don't update destination mod-time if files identical.
-      --old-sync-method                         Temporary flag to select old sync method
+      --old-sync-method                         Deprecated - use --fast-list instead
   -x, --one-file-system                         Don't cross filesystem boundaries.
       --onedrive-chunk-size int                 Above this size files will be chunked - must be multiple of 320k. (default 10M)
       --onedrive-upload-cutoff int              Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -106,6 +109,6 @@ rclone md5sum remote:path
 ```
 
 ### SEE ALSO
-* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36
+* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
 
-###### Auto generated by spf13/cobra on 18-Mar-2017
+###### Auto generated by spf13/cobra on 15-Jun-2017
diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md
index d8e70694b..9aa52a15c 100644
--- a/docs/content/commands/rclone_mkdir.md
+++ b/docs/content/commands/rclone_mkdir.md
@@ -1,5 +1,5 @@
 ---
-date: 2017-03-18T11:19:45Z
+date: 2017-06-15T20:06:09+01:00
 title: "rclone mkdir"
 slug: rclone_mkdir
 url: /commands/rclone_mkdir/
@@ -46,6 +46,7 @@ rclone mkdir remote:path
       --drive-formats string                    Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
       --drive-full-list                         Use a full listing for directory list. More data but usually quicker. (obsolete)
       --drive-list-chunk int                    Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-shared-with-me                    Only show files that are shared with me
       --drive-skip-gdocs                        Skip google documents in all listings.
       --drive-upload-cutoff int                 Cutoff for switching to chunked upload (default 8M)
       --drive-use-trash                         Send files to the trash instead of deleting permanently.
@@ -57,6 +58,7 @@ rclone mkdir remote:path
       --dump-headers                            Dump HTTP headers - may contain sensitive info
       --exclude stringArray                     Exclude files matching pattern
       --exclude-from stringArray                Read exclude patterns from file
+      --fast-list                               Use recursive list if available. Uses more memory but fewer transactions.
       --files-from stringArray                  Read list of source-file names from file
   -f, --filter stringArray                      Add a file-filtering rule
       --filter-from stringArray                 Read filtering patterns from a file
@@ -66,6 +68,7 @@ rclone mkdir remote:path
   -I, --ignore-times                            Don't skip files that match size and time - transfer all files
       --include stringArray                     Include files matching pattern
       --include-from stringArray                Read include patterns from file
+      --local-no-unicode-normalization          Don't apply unicode normalization to paths and filenames
       --log-file string                         Log everything to this file
       --log-level string                        Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
       --low-level-retries int                   Number of low level retries to do. (default 10)
@@ -80,7 +83,7 @@ rclone mkdir remote:path
       --no-gzip-encoding                        Don't set Accept-Encoding: gzip.
       --no-traverse                             Don't traverse destination file system on copy.
       --no-update-modtime                       Don't update destination mod-time if files identical.
-      --old-sync-method                         Temporary flag to select old sync method
+      --old-sync-method                         Deprecated - use --fast-list instead
   -x, --one-file-system                         Don't cross filesystem boundaries.
       --onedrive-chunk-size int                 Above this size files will be chunked - must be multiple of 320k. (default 10M)
       --onedrive-upload-cutoff int              Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -103,6 +106,6 @@ rclone mkdir remote:path
 ```
 
 ### SEE ALSO
-* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36
+* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
 
-###### Auto generated by spf13/cobra on 18-Mar-2017
+###### Auto generated by spf13/cobra on 15-Jun-2017
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index 0ba918e67..3833e9674 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -1,5 +1,5 @@
 ---
-date: 2017-03-18T11:19:45Z
+date: 2017-06-15T20:06:09+01:00
 title: "rclone mount"
 slug: rclone_mount
 url: /commands/rclone_mount/
@@ -19,20 +19,19 @@ This is **EXPERIMENTAL** - use with care.
 
 First set up your remote using `rclone config`.  Check it works with `rclone ls` etc.
 
-Start the mount like this (note the & on the end to put rclone in the background).
+Start the mount like this
 
-    rclone mount remote:path/to/files /path/to/local/mount &
+    rclone mount remote:path/to/files /path/to/local/mount
 
-Stop the mount with
+When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal,
+the mount is automatically stopped.
 
+The umount operation can fail, for example when the mountpoint is busy.
+When that happens, it is the user's responsibility to stop the mount manually with
+
+    # Linux
     fusermount -u /path/to/local/mount
-
-Or if that fails try
-
-    fusermount -z -u /path/to/local/mount
-
-Or with OS X
-
+    # OS X
     umount /path/to/local/mount
 
 ### Limitations ###
@@ -65,6 +64,21 @@ mount won't do that, so will be less reliable than the rclone command.
 
 Note that all the rclone filters can be used to select a subset of the
 files to be visible in the mount.
 
+### Directory Cache ###
+
+Using the `--dir-cache-time` flag, you can set how long a
+directory should be considered up to date and not refreshed from the
+backend. Changes made locally in the mount may appear immediately or
+invalidate the cache. However, changes done on the remote will only
+be picked up once the cache expires.
+
+Alternatively, you can send a `SIGHUP` signal to rclone for
+it to flush all directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+
+    kill -SIGHUP $(pidof rclone)
+
 ### Bugs ###
 
  * All the remotes should work for read, but some may not for write
@@ -72,15 +86,9 @@ files to be visible in the mount.
    * maybe should pass in size as -1 to mean work it out
    * Or put in an an upload cache to cache the files on disk first
 
-### TODO ###
-
-  * Check hashes on upload/download
-  * Preserve timestamps
-  * Move directories
-
 ```
-rclone mount remote:path /path/to/mountpoint
+rclone mount remote:path /path/to/mountpoint [flags]
 ```
 
 ### Options
 
@@ -94,8 +102,10 @@ rclone mount remote:path /path/to/mountpoint
       --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
       --gid uint32                Override the gid field set by the filesystem. (default 502)
       --max-read-ahead int        The number of bytes that can be prefetched for sequential reads. (default 128k)
-      --no-modtime                Don't read the modification time (can speed things up).
+      --no-checksum               Don't compare checksums on up/download.
+      --no-modtime                Don't read/write the modification time (can speed things up).
       --no-seek                   Don't allow seeking in files.
+      --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
       --read-only                 Mount read-only.
       --uid uint32                Override the uid field set by the filesystem. (default 502)
       --umask int                 Override the permission bits set by the filesystem. (default 2)
@@ -131,6 +141,7 @@ rclone mount remote:path /path/to/mountpoint
       --drive-formats string                    Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
       --drive-full-list                         Use a full listing for directory list. More data but usually quicker. (obsolete)
       --drive-list-chunk int                    Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-shared-with-me                    Only show files that are shared with me
       --drive-skip-gdocs                        Skip google documents in all listings.
       --drive-upload-cutoff int                 Cutoff for switching to chunked upload (default 8M)
       --drive-use-trash                         Send files to the trash instead of deleting permanently.
@@ -142,6 +153,7 @@ rclone mount remote:path /path/to/mountpoint
       --dump-headers                            Dump HTTP headers - may contain sensitive info
       --exclude stringArray                     Exclude files matching pattern
       --exclude-from stringArray                Read exclude patterns from file
+      --fast-list                               Use recursive list if available. Uses more memory but fewer transactions.
       --files-from stringArray                  Read list of source-file names from file
   -f, --filter stringArray                      Add a file-filtering rule
       --filter-from stringArray                 Read filtering patterns from a file
@@ -151,6 +163,7 @@ rclone mount remote:path /path/to/mountpoint
   -I, --ignore-times                            Don't skip files that match size and time - transfer all files
       --include stringArray                     Include files matching pattern
       --include-from stringArray                Read include patterns from file
+      --local-no-unicode-normalization          Don't apply unicode normalization to paths and filenames
       --log-file string                         Log everything to this file
       --log-level string                        Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
       --low-level-retries int                   Number of low level retries to do. (default 10)
@@ -165,7 +178,7 @@ rclone mount remote:path /path/to/mountpoint
       --no-gzip-encoding                        Don't set Accept-Encoding: gzip.
       --no-traverse                             Don't traverse destination file system on copy.
       --no-update-modtime                       Don't update destination mod-time if files identical.
-      --old-sync-method                         Temporary flag to select old sync method
+      --old-sync-method                         Deprecated - use --fast-list instead
   -x, --one-file-system                         Don't cross filesystem boundaries.
       --onedrive-chunk-size int                 Above this size files will be chunked - must be multiple of 320k. (default 10M)
       --onedrive-upload-cutoff int              Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -188,6 +201,6 @@ rclone mount remote:path /path/to/mountpoint
 ```
 
 ### SEE ALSO
-* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36
+* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
 
-###### Auto generated by spf13/cobra on 18-Mar-2017
+###### Auto generated by spf13/cobra on 15-Jun-2017
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md
index 4079f3a91..b985116fc 100644
--- a/docs/content/commands/rclone_move.md
+++ b/docs/content/commands/rclone_move.md
@@ -1,5 +1,5 @@
 ---
-date: 2017-03-18T11:19:45Z
+date: 2017-06-15T20:06:09+01:00
 title: "rclone move"
 slug: rclone_move
 url: /commands/rclone_move/
@@ -63,6 +63,7 @@ rclone move source:path dest:path
       --drive-formats string                    Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
       --drive-full-list                         Use a full listing for directory list. More data but usually quicker. (obsolete)
       --drive-list-chunk int                    Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-shared-with-me                    Only show files that are shared with me
       --drive-skip-gdocs                        Skip google documents in all listings.
       --drive-upload-cutoff int                 Cutoff for switching to chunked upload (default 8M)
       --drive-use-trash                         Send files to the trash instead of deleting permanently.
@@ -74,6 +75,7 @@ rclone move source:path dest:path
       --dump-headers                            Dump HTTP headers - may contain sensitive info
       --exclude stringArray                     Exclude files matching pattern
       --exclude-from stringArray                Read exclude patterns from file
+      --fast-list                               Use recursive list if available. Uses more memory but fewer transactions.
       --files-from stringArray                  Read list of source-file names from file
   -f, --filter stringArray                      Add a file-filtering rule
       --filter-from stringArray                 Read filtering patterns from a file
@@ -83,6 +85,7 @@ rclone move source:path dest:path
   -I, --ignore-times                            Don't skip files that match size and time - transfer all files
       --include stringArray                     Include files matching pattern
       --include-from stringArray                Read include patterns from file
+      --local-no-unicode-normalization          Don't apply unicode normalization to paths and filenames
       --log-file string                         Log everything to this file
       --log-level string                        Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
       --low-level-retries int                   Number of low level retries to do. (default 10)
@@ -97,7 +100,7 @@ rclone move source:path dest:path
       --no-gzip-encoding                        Don't set Accept-Encoding: gzip.
       --no-traverse                             Don't traverse destination file system on copy.
       --no-update-modtime                       Don't update destination mod-time if files identical.
-      --old-sync-method                         Temporary flag to select old sync method
+      --old-sync-method                         Deprecated - use --fast-list instead
   -x, --one-file-system                         Don't cross filesystem boundaries.
       --onedrive-chunk-size int                 Above this size files will be chunked - must be multiple of 320k. (default 10M)
       --onedrive-upload-cutoff int              Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -120,6 +123,6 @@ rclone move source:path dest:path
 ```
 
 ### SEE ALSO
-* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36
+* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
 
-###### Auto generated by spf13/cobra on 18-Mar-2017
+###### Auto generated by spf13/cobra on 15-Jun-2017
diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md
index a3d9ff241..e4e008920 100644
--- a/docs/content/commands/rclone_moveto.md
+++ b/docs/content/commands/rclone_moveto.md
@@ -1,5 +1,5 @@
 ---
-date: 2017-03-18T11:19:45Z
+date: 2017-06-15T20:06:09+01:00
 title: "rclone moveto"
 slug: rclone_moveto
 url: /commands/rclone_moveto/
@@ -75,6 +75,7 @@ rclone moveto source:path dest:path
       --drive-formats string                    Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
       --drive-full-list                         Use a full listing for directory list. More data but usually quicker. (obsolete)
       --drive-list-chunk int                    Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-shared-with-me                    Only show files that are shared with me
       --drive-skip-gdocs                        Skip google documents in all listings.
       --drive-upload-cutoff int                 Cutoff for switching to chunked upload (default 8M)
       --drive-use-trash                         Send files to the trash instead of deleting permanently.
@@ -86,6 +87,7 @@ rclone moveto source:path dest:path
       --dump-headers                            Dump HTTP headers - may contain sensitive info
       --exclude stringArray                     Exclude files matching pattern
       --exclude-from stringArray                Read exclude patterns from file
+      --fast-list                               Use recursive list if available. Uses more memory but fewer transactions.
       --files-from stringArray                  Read list of source-file names from file
   -f, --filter stringArray                      Add a file-filtering rule
       --filter-from stringArray                 Read filtering patterns from a file
@@ -95,6 +97,7 @@ rclone moveto source:path dest:path
   -I, --ignore-times                            Don't skip files that match size and time - transfer all files
       --include stringArray                     Include files matching pattern
       --include-from stringArray                Read include patterns from file
+      --local-no-unicode-normalization          Don't apply unicode normalization to paths and filenames
       --log-file string                         Log everything to this file
       --log-level string                        Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
       --low-level-retries int                   Number of low level retries to do. (default 10)
@@ -109,7 +112,7 @@ rclone moveto source:path dest:path
       --no-gzip-encoding                        Don't set Accept-Encoding: gzip.
       --no-traverse                             Don't traverse destination file system on copy.
       --no-update-modtime                       Don't update destination mod-time if files identical.
-      --old-sync-method                         Temporary flag to select old sync method
+      --old-sync-method                         Deprecated - use --fast-list instead
   -x, --one-file-system                         Don't cross filesystem boundaries.
       --onedrive-chunk-size int                 Above this size files will be chunked - must be multiple of 320k. (default 10M)
       --onedrive-upload-cutoff int              Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -132,6 +135,6 @@ rclone moveto source:path dest:path
 ```
 
 ### SEE ALSO
-* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36
+* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
 
-###### Auto generated by spf13/cobra on 18-Mar-2017
+###### Auto generated by spf13/cobra on 15-Jun-2017
diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md
new file mode 100644
index 000000000..54e116934
--- /dev/null
+++ b/docs/content/commands/rclone_ncdu.md
@@ -0,0 +1,135 @@
+---
+date: 2017-06-15T20:06:09+01:00
+title: "rclone ncdu"
+slug: rclone_ncdu
+url: /commands/rclone_ncdu/
+---
+## rclone ncdu
+
+Explore a remote with a text based user interface.
+
+### Synopsis
+
+
+
+This displays a text based user interface allowing the navigation of a
+remote. It is most useful for answering the question - "What is using
+all my disk space?".
+
+To make the user interface it first scans the entire remote given and
+builds an in memory representation.  rclone ncdu can be used during
+this scanning phase and you will see it building up the directory
+structure as it goes along.
+
+Here are the keys - press '?' to toggle the help on and off
+
+     ↑,↓ or k,j to Move
+     →,l to enter
+     ←,h to return
+     c toggle counts
+     g toggle graph
+     n,s,C sort by name,size,count
+     ? to toggle help on and off
+     q/ESC/c-C to quit
+
+This an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for
+rclone remotes.  It is missing lots of features at the moment, most
+importantly deleting files, but is useful as it stands.
+
+
+```
+rclone ncdu remote:path
+```
+
+### Options inherited from parent commands
+
+```
+      --acd-templink-threshold int              Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-upload-wait-per-gb duration         Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --ask-password                            Allow prompt for password for encrypted configuration. (default true)
+      --b2-chunk-size int                       Upload chunk size. Must fit in memory. (default 96M)
+      --b2-test-mode string                     A flag string for X-Bz-Test-Mode header.
+      --b2-upload-cutoff int                    Cutoff for switching to chunked upload (default 190.735M)
+      --b2-versions                             Include old versions in directory listings.
+      --backup-dir string                       Make backups into hierarchy based in DIR.
+      --buffer-size int                         Buffer size when copying files. (default 16M)
+      --bwlimit BwTimetable                     Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --checkers int                            Number of checkers to run in parallel. (default 8)
+  -c, --checksum                                Skip based on checksum & size, not mod-time & size
+      --config string                           Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration                     Connect timeout (default 1m0s)
+  -L, --copy-links                              Follow symlinks and copy the pointed to item.
+      --cpuprofile string                       Write cpu profile to file
+      --crypt-show-mapping                      For all files listed show how the names encrypt.
+      --delete-after                            When synchronizing, delete files on destination after transfering
+      --delete-before                           When synchronizing, delete files on destination before transfering
+      --delete-during                           When synchronizing, delete files during transfer (default)
+      --delete-excluded                         Delete files on dest excluded from sync
+      --drive-auth-owner-only                   Only consider files owned by the authenticated user. Requires drive-full-list.
+      --drive-chunk-size int                    Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+      --drive-formats string                    Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-full-list                         Use a full listing for directory list. More data but usually quicker. (obsolete)
+      --drive-list-chunk int                    Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-shared-with-me                    Only show files that are shared with me
+      --drive-skip-gdocs                        Skip google documents in all listings.
+      --drive-upload-cutoff int                 Cutoff for switching to chunked upload (default 8M)
+      --drive-use-trash                         Send files to the trash instead of deleting permanently.
+      --dropbox-chunk-size int                  Upload chunk size. Max 150M. (default 128M)
+  -n, --dry-run                                 Do a trial run with no permanent changes
+      --dump-auth                               Dump HTTP headers with auth info
+      --dump-bodies                             Dump HTTP headers and bodies - may contain sensitive info
+      --dump-filters                            Dump the filters to the output
+      --dump-headers                            Dump HTTP headers - may contain sensitive info
+      --exclude stringArray                     Exclude files matching pattern
+      --exclude-from stringArray                Read exclude patterns from file
+      --fast-list                               Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray                  Read list of source-file names from file
+  -f, --filter stringArray                      Add a file-filtering rule
+      --filter-from stringArray                 Read filtering patterns from a file
+      --ignore-checksum                         Skip post copy check of checksums.
+      --ignore-existing                         Skip all files that exist on destination
+      --ignore-size                             Ignore size when skipping use mod-time or checksum.
+  -I, --ignore-times                            Don't skip files that match size and time - transfer all files
+      --include stringArray                     Include files matching pattern
+      --include-from stringArray                Read include patterns from file
+      --local-no-unicode-normalization          Don't apply unicode normalization to paths and filenames
+      --log-file string                         Log everything to this file
+      --log-level string                        Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --low-level-retries int                   Number of low level retries to do. (default 10)
+      --max-age string                          Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+      --max-depth int                           If set limits the recursion depth to this. (default -1)
+      --max-size int                            Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+      --memprofile string                       Write memory profile to file
+      --min-age string                          Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+      --min-size int                            Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+      --modify-window duration                  Max time diff to be considered the same (default 1ns)
+      --no-check-certificate                    Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding                        Don't set Accept-Encoding: gzip.
+      --no-traverse                             Don't traverse destination file system on copy.
+      --no-update-modtime                       Don't update destination mod-time if files identical.
+      --old-sync-method                         Deprecated - use --fast-list instead
+  -x, --one-file-system                         Don't cross filesystem boundaries.
+      --onedrive-chunk-size int                 Above this size files will be chunked - must be multiple of 320k. (default 10M)
+      --onedrive-upload-cutoff int              Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+  -q, --quiet                                   Print as little stuff as possible
+      --retries int                             Retry operations this many times if they fail (default 3)
+      --s3-acl string                           Canned ACL used when creating buckets and/or storing objects in S3
+      --s3-storage-class string                 Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+      --size-only                               Skip based on size only, not mod-time or checksum
+      --stats duration                          Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-unit string                       Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --suffix string                           Suffix for use with --backup-dir.
+      --swift-chunk-size int                    Above this size files will be chunked into a _segments container. (default 5G)
+      --syslog                                  Use Syslog for logging
+      --syslog-facility string                  Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration                        IO idle timeout (default 5m0s)
+      --track-renames                           When synchronizing, track file renames and do a server side move if possible
+      --transfers int                           Number of file transfers to run in parallel. (default 4)
+  -u, --update                                  Skip files that are newer on the destination.
+  -v, --verbose count[=-1]                      Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+
+###### Auto generated by spf13/cobra on 15-Jun-2017
diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md
index 20d7da8ec..a28662331 100644
--- a/docs/content/commands/rclone_obscure.md
+++ b/docs/content/commands/rclone_obscure.md
@@ -1,5 +1,5 @@
 ---
-date: 2017-03-18T11:19:45Z
+date: 2017-06-15T20:06:09+01:00
 title: "rclone obscure"
 slug: rclone_obscure
 url: /commands/rclone_obscure/
@@ -46,6 +46,7 @@ rclone obscure password
       --drive-formats string                    Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
       --drive-full-list                         Use a full listing for directory list. More data but usually quicker. (obsolete)
       --drive-list-chunk int                    Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-shared-with-me                    Only show files that are shared with me
       --drive-skip-gdocs                        Skip google documents in all listings.
       --drive-upload-cutoff int                 Cutoff for switching to chunked upload (default 8M)
       --drive-use-trash                         Send files to the trash instead of deleting permanently.
@@ -57,6 +58,7 @@ rclone obscure password
       --dump-headers                            Dump HTTP headers - may contain sensitive info
       --exclude stringArray                     Exclude files matching pattern
       --exclude-from stringArray                Read exclude patterns from file
+      --fast-list                               Use recursive list if available. Uses more memory but fewer transactions.
       --files-from stringArray                  Read list of source-file names from file
   -f, --filter stringArray                      Add a file-filtering rule
       --filter-from stringArray                 Read filtering patterns from a file
@@ -66,6 +68,7 @@ rclone obscure password
   -I, --ignore-times                            Don't skip files that match size and time - transfer all files
       --include stringArray                     Include files matching pattern
       --include-from stringArray                Read include patterns from file
+      --local-no-unicode-normalization          Don't apply unicode normalization to paths and filenames
       --log-file string                         Log everything to this file
       --log-level string                        Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
       --low-level-retries int                   Number of low level retries to do. (default 10)
@@ -80,7 +83,7 @@ rclone obscure password
       --no-gzip-encoding                        Don't set Accept-Encoding: gzip.
       --no-traverse                             Don't traverse destination file system on copy.
       --no-update-modtime                       Don't update destination mod-time if files identical.
-      --old-sync-method                         Temporary flag to select old sync method
+      --old-sync-method                         Deprecated - use --fast-list instead
   -x, --one-file-system                         Don't cross filesystem boundaries.
       --onedrive-chunk-size int                 Above this size files will be chunked - must be multiple of 320k. (default 10M)
       --onedrive-upload-cutoff int              Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -103,6 +106,6 @@ rclone obscure password
 ```
 
 ### SEE ALSO
-* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36
+* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
 
-###### Auto generated by spf13/cobra on 18-Mar-2017
+###### Auto generated by spf13/cobra on 15-Jun-2017
diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md
index bfc8b9604..e9ea9d5bf 100644
--- a/docs/content/commands/rclone_purge.md
+++ b/docs/content/commands/rclone_purge.md
@@ -1,5 +1,5 @@
 ---
-date: 2017-03-18T11:19:45Z
+date: 2017-06-15T20:06:09+01:00
 title: "rclone purge"
 slug: rclone_purge
 url: /commands/rclone_purge/
@@ -50,6 +50,7 @@ rclone purge remote:path
       --drive-formats string                    Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
       --drive-full-list                         Use a full listing for directory list. More data but usually quicker. (obsolete)
       --drive-list-chunk int                    Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-shared-with-me                    Only show files that are shared with me
       --drive-skip-gdocs                        Skip google documents in all listings.
       --drive-upload-cutoff int                 Cutoff for switching to chunked upload (default 8M)
       --drive-use-trash                         Send files to the trash instead of deleting permanently.
@@ -61,6 +62,7 @@ rclone purge remote:path
       --dump-headers                            Dump HTTP headers - may contain sensitive info
       --exclude stringArray                     Exclude files matching pattern
       --exclude-from stringArray                Read exclude patterns from file
+      --fast-list                               Use recursive list if available. Uses more memory but fewer transactions.
       --files-from stringArray                  Read list of source-file names from file
   -f, --filter stringArray                      Add a file-filtering rule
       --filter-from stringArray                 Read filtering patterns from a file
@@ -70,6 +72,7 @@ rclone purge remote:path
   -I, --ignore-times                            Don't skip files that match size and time - transfer all files
       --include stringArray                     Include files matching pattern
       --include-from stringArray                Read include patterns from file
+      --local-no-unicode-normalization          Don't apply unicode normalization to paths and filenames
       --log-file string                         Log everything to this file
       --log-level string                        Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
       --low-level-retries int                   Number of low level retries to do. (default 10)
@@ -84,7 +87,7 @@ rclone purge remote:path
       --no-gzip-encoding                        Don't set Accept-Encoding: gzip.
       --no-traverse                             Don't traverse destination file system on copy.
       --no-update-modtime                       Don't update destination mod-time if files identical.
-      --old-sync-method                         Temporary flag to select old sync method
+      --old-sync-method                         Deprecated - use --fast-list instead
   -x, --one-file-system                         Don't cross filesystem boundaries.
       --onedrive-chunk-size int                 Above this size files will be chunked - must be multiple of 320k. (default 10M)
       --onedrive-upload-cutoff int              Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -107,6 +110,6 @@ rclone purge remote:path
 ```
 
 ### SEE ALSO
-* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36
+* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
 
-###### Auto generated by spf13/cobra on 18-Mar-2017
+###### Auto generated by spf13/cobra on 15-Jun-2017
diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md
index fe21682cc..ee290487a 100644
--- a/docs/content/commands/rclone_rmdir.md
+++ b/docs/content/commands/rclone_rmdir.md
@@ -1,5 +1,5 @@
 ---
-date: 2017-03-18T11:19:45Z
+date: 2017-06-15T20:06:09+01:00
 title: "rclone rmdir"
 slug: rclone_rmdir
 url: /commands/rclone_rmdir/
@@ -48,6 +48,7 @@ rclone rmdir remote:path
       --drive-formats string                    Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
       --drive-full-list                         Use a full listing for directory list. More data but usually quicker. (obsolete)
       --drive-list-chunk int                    Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-shared-with-me                    Only show files that are shared with me
       --drive-skip-gdocs                        Skip google documents in all listings.
       --drive-upload-cutoff int                 Cutoff for switching to chunked upload (default 8M)
       --drive-use-trash                         Send files to the trash instead of deleting permanently.
@@ -59,6 +60,7 @@ rclone rmdir remote:path
       --dump-headers                            Dump HTTP headers - may contain sensitive info
       --exclude stringArray                     Exclude files matching pattern
       --exclude-from stringArray                Read exclude patterns from file
+      --fast-list                               Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -68,6 +70,7 @@ rclone rmdir remote:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -82,7 +85,7 @@ rclone rmdir remote:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -105,6 +108,6 @@ rclone rmdir remote:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md index a6908c59c..b30872e0d 100644 --- a/docs/content/commands/rclone_rmdirs.md +++ b/docs/content/commands/rclone_rmdirs.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone rmdirs" slug: rclone_rmdirs url: /commands/rclone_rmdirs/ @@ -53,6 +53,7 @@ rclone rmdirs remote:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -64,6 +65,7 @@ rclone rmdirs remote:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -73,6 +75,7 @@ rclone rmdirs remote:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -87,7 +90,7 @@ rclone rmdirs remote:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -110,6 +113,6 @@ rclone rmdirs remote:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md index 3a568c7d5..0fb53e286 100644 --- a/docs/content/commands/rclone_sha1sum.md +++ b/docs/content/commands/rclone_sha1sum.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone sha1sum" slug: rclone_sha1sum url: /commands/rclone_sha1sum/ @@ -49,6 +49,7 @@ rclone sha1sum remote:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -60,6 +61,7 @@ rclone sha1sum remote:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -69,6 +71,7 @@ rclone sha1sum remote:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -83,7 +86,7 @@ rclone sha1sum remote:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -106,6 +109,6 @@ rclone sha1sum remote:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md index a13788993..f78d22445 100644 --- a/docs/content/commands/rclone_size.md +++ b/docs/content/commands/rclone_size.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone size" slug: rclone_size url: /commands/rclone_size/ @@ -46,6 +46,7 @@ rclone size remote:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -57,6 +58,7 @@ rclone size remote:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -66,6 +68,7 @@ rclone size remote:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -80,7 +83,7 @@ rclone size remote:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -103,6 +106,6 @@ rclone size remote:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md index c29e93284..b989963ed 100644 --- a/docs/content/commands/rclone_sync.md +++ b/docs/content/commands/rclone_sync.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone sync" slug: rclone_sync url: /commands/rclone_sync/ @@ -65,6 +65,7 @@ rclone sync source:path dest:path --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -76,6 +77,7 @@ rclone sync source:path dest:path --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -85,6 +87,7 @@ rclone sync source:path dest:path -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -99,7 +102,7 @@ rclone sync source:path dest:path --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -122,6 +125,6 @@ rclone sync source:path dest:path ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md index c4a04cc83..2cd985729 100644 --- a/docs/content/commands/rclone_version.md +++ b/docs/content/commands/rclone_version.md @@ -1,5 +1,5 @@ --- -date: 2017-03-18T11:19:45Z +date: 2017-06-15T20:06:09+01:00 title: "rclone version" slug: rclone_version url: /commands/rclone_version/ @@ -46,6 +46,7 @@ rclone version --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me --drive-skip-gdocs Skip google documents in all listings. --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --drive-use-trash Send files to the trash instead of deleting permanently. @@ -57,6 +58,7 @@ rclone version --dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. 
--files-from stringArray Read list of source-file names from file -f, --filter stringArray Add a file-filtering rule --filter-from stringArray Read filtering patterns from a file @@ -66,6 +68,7 @@ rclone version -I, --ignore-times Don't skip files that match size and time - transfer all files --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --log-file string Log everything to this file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO") --low-level-retries int Number of low level retries to do. (default 10) @@ -80,7 +83,7 @@ rclone version --no-gzip-encoding Don't set Accept-Encoding: gzip. --no-traverse Don't traverse destination file system on copy. --no-update-modtime Don't update destination mod-time if files identical. - --old-sync-method Temporary flag to select old sync method + --old-sync-method Deprecated - use --fast-list instead -x, --one-file-system Don't cross filesystem boundaries. --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) @@ -103,6 +106,6 @@ rclone version ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9 -###### Auto generated by spf13/cobra on 18-Mar-2017 +###### Auto generated by spf13/cobra on 15-Jun-2017 diff --git a/rclone.1 b/rclone.1 index ad2c21c02..ea99aaf4c 100644 --- a/rclone.1 +++ b/rclone.1 @@ -1,11 +1,11 @@ .\"t -.\" Automatically generated by Pandoc 1.16.0.2 +.\" Automatically generated by Pandoc 1.17.2 .\" -.TH "rclone" "1" "Mar 18, 2017" "User Manual" "" +.TH "rclone" "1" "Jun 15, 2017" "User Manual" "" .hy .SH Rclone .PP -[IMAGE: Logo (http://rclone.org/img/rclone-120x120.png)] (http://rclone.org/) +[IMAGE: Logo (https://rclone.org/img/rclone-120x120.png)] (https://rclone.org/) .PP Rclone is a command line program to sync files and directories to and from @@ -22,7 +22,7 @@ Google Cloud Storage .IP \[bu] 2 Amazon Drive .IP \[bu] 2 -Microsoft One Drive +Microsoft OneDrive .IP \[bu] 2 Hubic .IP \[bu] 2 @@ -32,6 +32,8 @@ Yandex Disk .IP \[bu] 2 SFTP .IP \[bu] 2 +FTP +.IP \[bu] 2 The local filesystem .PP Features @@ -42,49 +44,49 @@ Timestamps preserved on files .IP \[bu] 2 Partial syncs supported on a whole file basis .IP \[bu] 2 -Copy (http://rclone.org/commands/rclone_copy/) mode to just copy +Copy (https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files .IP \[bu] 2 -Sync (http://rclone.org/commands/rclone_sync/) (one way) mode to make a +Sync (https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical .IP \[bu] 2 -Check (http://rclone.org/commands/rclone_check/) mode to check for file +Check (https://rclone.org/commands/rclone_check/) mode to check for file hash equality .IP \[bu] 2 Can sync to and from 
network, eg two different cloud accounts .IP \[bu] 2 -Optional encryption (Crypt (http://rclone.org/crypt/)) +Optional encryption (Crypt (https://rclone.org/crypt/)) .IP \[bu] 2 Optional FUSE mount (rclone -mount (http://rclone.org/commands/rclone_mount/)) +mount (https://rclone.org/commands/rclone_mount/)) .PP Links .IP \[bu] 2 -Home page (http://rclone.org/) +Home page (https://rclone.org/) .IP \[bu] 2 Github project page for source and bug -tracker (http://github.com/ncw/rclone) +tracker (https://github.com/ncw/rclone) .IP \[bu] 2 Rclone Forum (https://forum.rclone.org) .IP \[bu] 2 Google+ page .IP \[bu] 2 -Downloads (http://rclone.org/downloads/) +Downloads (https://rclone.org/downloads/) .SH Install .PP Rclone is a Go program and comes as a single binary file. .SS Quickstart .IP \[bu] 2 -Download (http://rclone.org/downloads/) the relevant binary. +Download (https://rclone.org/downloads/) the relevant binary. .IP \[bu] 2 Unpack the \f[C]rclone\f[] binary. .IP \[bu] 2 Run \f[C]rclone\ config\f[] to setup. -See rclone config docs (http://rclone.org/docs/) for more details. +See rclone config docs (https://rclone.org/docs/) for more details. .PP See below for some expanded Linux / macOS instructions. .PP -See the Usage section (http://rclone.org/docs/) of the docs for how to +See the Usage section (https://rclone.org/docs/) of the docs for how to use rclone, or run \f[C]rclone\ \-h\f[]. .SS Linux installation from precompiled binary .PP @@ -92,7 +94,7 @@ Fetch and unpack .IP .nf \f[C] -curl\ \-O\ http://downloads.rclone.org/rclone\-current\-linux\-amd64.zip +curl\ \-O\ https://downloads.rclone.org/rclone\-current\-linux\-amd64.zip unzip\ rclone\-current\-linux\-amd64.zip cd\ rclone\-*\-linux\-amd64 \f[] .fi .PP @@ -119,7 +121,7 @@ sudo\ mandb\ .fi .PP Run \f[C]rclone\ config\f[] to setup. -See rclone config docs (http://rclone.org/docs/) for more details. +See rclone config docs (https://rclone.org/docs/) for more details.
.IP .nf \f[C] @@ -132,7 +134,7 @@ Download the latest version of rclone. .IP .nf \f[C] -cd\ &&\ curl\ \-O\ http://downloads.rclone.org/rclone\-current\-osx\-amd64.zip +cd\ &&\ curl\ \-O\ https://downloads.rclone.org/rclone\-current\-osx\-amd64.zip \f[] .fi .PP @@ -162,7 +164,7 @@ cd\ ..\ &&\ rm\ \-rf\ rclone\-*\-osx\-amd64\ rclone\-current\-osx\-amd64.zip .fi .PP Run \f[C]rclone\ config\f[] to setup. -See rclone config docs (http://rclone.org/docs/) for more details. +See rclone config docs (https://rclone.org/docs/) for more details. .IP .nf \f[C] @@ -171,7 +173,7 @@ rclone\ config .fi .SS Install from source .PP -Make sure you have at least Go (https://golang.org/) 1.5 installed. +Make sure you have at least Go (https://golang.org/) 1.6 installed. Make sure your \f[C]GOPATH\f[] is set, then: .IP .nf @@ -216,7 +218,7 @@ install Snapd on your distro using the instructions below sudo snap install rclone \-\-classic .IP \[bu] 2 Run \f[C]rclone\ config\f[] to setup. -See rclone config docs (http://rclone.org/docs/) for more details. +See rclone config docs (https://rclone.org/docs/) for more details. .PP See below for how to install snapd if it isn\[aq]t already installed .SS Arch @@ -280,7 +282,7 @@ layer (https://github.com/morphis/meta-snappy/blob/master/README.md). 
.IP .nf \f[C] -sudo\ zypper\ addrepo\ http://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/\ snappy +sudo\ zypper\ addrepo\ https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/\ snappy sudo\ zypper\ install\ snapd \f[] .fi @@ -306,32 +308,34 @@ rclone\ config .PP See the following for detailed instructions for .IP \[bu] 2 -Google drive (http://rclone.org/drive/) +Google drive (https://rclone.org/drive/) .IP \[bu] 2 -Amazon S3 (http://rclone.org/s3/) +Amazon S3 (https://rclone.org/s3/) .IP \[bu] 2 Swift / Rackspace Cloudfiles / Memset -Memstore (http://rclone.org/swift/) +Memstore (https://rclone.org/swift/) .IP \[bu] 2 -Dropbox (http://rclone.org/dropbox/) +Dropbox (https://rclone.org/dropbox/) .IP \[bu] 2 -Google Cloud Storage (http://rclone.org/googlecloudstorage/) +Google Cloud Storage (https://rclone.org/googlecloudstorage/) .IP \[bu] 2 -Local filesystem (http://rclone.org/local/) +Local filesystem (https://rclone.org/local/) .IP \[bu] 2 -Amazon Drive (http://rclone.org/amazonclouddrive/) +Amazon Drive (https://rclone.org/amazonclouddrive/) .IP \[bu] 2 -Backblaze B2 (http://rclone.org/b2/) +Backblaze B2 (https://rclone.org/b2/) .IP \[bu] 2 -Hubic (http://rclone.org/hubic/) +Hubic (https://rclone.org/hubic/) .IP \[bu] 2 -Microsoft One Drive (http://rclone.org/onedrive/) +Microsoft OneDrive (https://rclone.org/onedrive/) .IP \[bu] 2 -Yandex Disk (http://rclone.org/yandex/) +Yandex Disk (https://rclone.org/yandex/) .IP \[bu] 2 -SFTP (http://rclone.org/sftp/) +SFTP (https://rclone.org/sftp/) .IP \[bu] 2 -Crypt (http://rclone.org/crypt/) \- to encrypt other remotes +FTP (https://rclone.org/ftp/) +.IP \[bu] 2 +Crypt (https://rclone.org/crypt/) \- to encrypt other remotes .SS Usage .PP Rclone syncs a directory tree from one storage system to another. @@ -595,7 +599,7 @@ really want to check all the data. 
.IP .nf \f[C] -rclone\ check\ source:path\ dest:path +rclone\ check\ source:path\ dest:path\ [flags] \f[] .fi .SS Options @@ -821,7 +825,7 @@ rclone\ dedupe\ rename\ "drive:Google\ Photos" .IP .nf \f[C] -rclone\ dedupe\ [mode]\ remote:path +rclone\ dedupe\ [mode]\ remote:path\ [flags] \f[] .fi .SS Options @@ -884,7 +888,7 @@ Note that if offset is negative it will count from the end, so .IP .nf \f[C] -rclone\ cat\ remote:path +rclone\ cat\ remote:path\ [flags] \f[] .fi .SS Options @@ -983,6 +987,21 @@ After it has run it will log the status of the encryptedremote:. rclone\ cryptcheck\ remote:path\ cryptedremote:path \f[] .fi +.SS rclone dbhashsum +.PP +Produces a Dropbox hash file for all the objects in the path. +.SS Synopsis +.PP +Produces a Dropbox hash file for all the objects in the path. +The hashes are calculated according to Dropbox content hash +rules (https://www.dropbox.com/developers/reference/content-hash). +The output is in the same format as md5sum and sha1sum. +.IP +.nf +\f[C] +rclone\ dbhashsum\ remote:path +\f[] +.fi .SS rclone genautocomplete .PP Output bash completion script for rclone. @@ -1027,7 +1046,14 @@ website. .IP .nf \f[C] -rclone\ gendocs\ output_directory +rclone\ gendocs\ output_directory\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ gendocs \f[] .fi .SS rclone listremotes .PP List all the remotes in the config file. .SS Synopsis .PP rclone listremotes lists all the available remotes from the config file. When used with the \-l flag it lists the types too. .IP .nf \f[C] -rclone\ listremotes +rclone\ listremotes\ [flags] \f[] .fi .SS Options .IP .nf \f[C] \ \ \-l,\ \-\-long\ \ \ Show\ the\ type\ as\ well\ as\ names. \f[] .fi +.SS rclone lsjson +.PP +List directories and objects in the path in JSON format. +.SS Synopsis +.PP +List directories and objects in the path in JSON format.
+.PP +The output is an array of Items, where each Item looks like this +.PP +{ "Hashes" : { "SHA\-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", +"MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : +"ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, +"IsDir" : false, "ModTime" : "2017\-05\-31T16:15:57.034468261+01:00", +"Name" : "file.txt", "Path" : "full/path/goes/here/file.txt", "Size" : 6 +} +.PP +If \-\-hash is not specified the Hashes property won\[aq]t be +emitted. +.PP +If \-\-no\-modtime is specified then ModTime will be blank. +.PP +The time is in RFC3339 format with nanosecond precision. +.PP +The whole output can be processed as a JSON blob, or alternatively it +can be processed line by line as each item is written one per line. +.IP +.nf +\f[C] +rclone\ lsjson\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \ \ \ \ \-\-hash\ \ \ \ \ \ \ \ \ Include\ hashes\ in\ the\ output\ (may\ take\ longer). +\ \ \ \ \ \ \-\-no\-modtime\ \ \ Don\[aq]t\ read\ the\ modification\ time\ (can\ speed\ things\ up). +\ \ \-R,\ \-\-recursive\ \ \ \ Recurse\ into\ the\ listing. +\f[] +.fi .SS rclone mount .PP Mount the remote as a mountpoint. @@ -1065,35 +1131,26 @@ This is \f[B]EXPERIMENTAL\f[] \- use with care. First set up your remote using \f[C]rclone\ config\f[]. Check it works with \f[C]rclone\ ls\f[] etc. .PP -Start the mount like this (note the & on the end to put rclone in the -background). +Start the mount like this .IP .nf \f[C] -rclone\ mount\ remote:path/to/files\ /path/to/local/mount\ & +rclone\ mount\ remote:path/to/files\ /path/to/local/mount \f[] .fi .PP -Stop the mount with +When the program ends, either via Ctrl+C or receiving a SIGINT or +SIGTERM signal, the mount is automatically stopped. +.PP +The umount operation can fail, for example when the mountpoint is busy.
+When that happens, it is the user\[aq]s responsibility to stop the mount +manually with .IP .nf \f[C] +#\ Linux fusermount\ \-u\ /path/to/local/mount -\f[] -.fi -.PP -Or if that fails try -.IP -.nf -\f[C] -fusermount\ \-z\ \-u\ /path/to/local/mount -\f[] -.fi -.PP -Or with OS X -.IP -.nf -\f[C] +#\ OS\ X umount\ /path/to/local/mount \f[] .fi @@ -1126,6 +1183,26 @@ won\[aq]t do that, so will be less reliable than the rclone command. .PP Note that all the rclone filters can be used to select a subset of the files to be visible in the mount. +.SS Directory Cache +.PP +Using the \f[C]\-\-dir\-cache\-time\f[] flag, you can set how long a +directory should be considered up to date and not refreshed from the +backend. +Changes made locally in the mount may appear immediately or invalidate +the cache. +However, changes done on the remote will only be picked up once the +cache expires. +.PP +Alternatively, you can send a \f[C]SIGHUP\f[] signal to rclone for it to +flush all directory caches, regardless of how old they are. 
+Assuming only one rclone instance is running, you can reset the cache +like this: +.IP +.nf +\f[C] +kill\ \-SIGHUP\ $(pidof\ rclone) +\f[] +.fi .SS Bugs .IP \[bu] 2 All the remotes should work for read, but some may not for write @@ -1137,17 +1214,10 @@ maybe should pass in size as \-1 to mean work it out .IP \[bu] 2 Or put in an upload cache to cache the files on disk first .RE -.SS TODO -.IP \[bu] 2 -Check hashes on upload/download -.IP \[bu] 2 -Preserve timestamps -.IP \[bu] 2 -Move directories .IP .nf \f[C] -rclone\ mount\ remote:path\ /path/to/mountpoint +rclone\ mount\ remote:path\ /path/to/mountpoint\ [flags] \f[] .fi .SS Options @@ -1162,8 +1232,10 @@ rclone\ mount\ remote:path\ /path/to/mountpoint \ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) \ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502) \ \ \ \ \ \ \-\-max\-read\-ahead\ int\ \ \ \ \ \ \ \ The\ number\ of\ bytes\ that\ can\ be\ prefetched\ for\ sequential\ reads.\ (default\ 128k) -\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read\ the\ modification\ time\ (can\ speed\ things\ up). +\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download. +\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up). \ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files. +\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s) \ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only.
\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502) \ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2) @@ -1217,6 +1289,46 @@ src will be deleted on successful transfer. rclone\ moveto\ source:path\ dest:path \f[] .fi +.SS rclone ncdu +.PP +Explore a remote with a text based user interface. +.SS Synopsis +.PP +This displays a text based user interface allowing the navigation of a +remote. +It is most useful for answering the question \- "What is using all my +disk space?". +.PP +To make the user interface it first scans the entire remote given and +builds an in memory representation. +rclone ncdu can be used during this scanning phase and you will see it +building up the directory structure as it goes along. +.PP +Here are the keys \- press \[aq]?\[aq] to toggle the help on and off +.IP +.nf +\f[C] +\ ↑,↓\ or\ k,j\ to\ Move +\ →,l\ to\ enter +\ ←,h\ to\ return +\ c\ toggle\ counts +\ g\ toggle\ graph +\ n,s,C\ sort\ by\ name,size,count +\ ?\ to\ toggle\ help\ on\ and\ off +\ q/ESC/c\-C\ to\ quit +\f[] +.fi +.PP +This is an homage to the ncdu tool (https://dev.yorhel.nl/ncdu) but for +rclone remotes. +It is missing lots of features at the moment, most importantly deleting +files, but is useful as it stands. +.IP +.nf +\f[C] +rclone\ ncdu\ remote:path +\f[] +.fi .SS rclone obscure .PP Obscure password for use in the rclone.conf @@ -1453,7 +1565,7 @@ will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited. .PP Bandwidth limits only apply to the data transfer. -The don\[aq]t apply to the bandwith of the directory listings etc. +They don\[aq]t apply to the bandwidth of the directory listings etc. .PP Note that the units are Bytes/s not Bits/s. Typically connections are measured in Bits/s \- to convert divide by 8.
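The Bytes/s versus Bits/s distinction at the end of the bandwidth hunk above is a common source of confusion, so here is the divide-by-8 arithmetic spelled out as a small illustrative snippet (not part of rclone itself):

```python
def bits_to_bytes_per_sec(bits_per_sec: float) -> float:
    """Connection speeds are usually quoted in bits/s, but rclone's
    bandwidth figures are in Bytes/s - to convert, divide by 8."""
    return bits_per_sec / 8

# A "10 Mbit/s" line can therefore carry at most 1.25 MBytes/s of data.
assert bits_to_bytes_per_sec(10_000_000) == 1_250_000
```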
@@ -1488,7 +1600,7 @@ and a more accurate sync is desired than just checking the file size. This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in -the overview section (http://rclone.org/overview/). +the overview section (https://rclone.org/overview/). .PP Eg \f[C]rclone\ \-\-checksum\ sync\ s3:/bucket\ swift:/bucket\f[] would run much quicker than without the \f[C]\-\-checksum\f[] flag. @@ -1774,15 +1886,52 @@ This is the fastest option and uses the least memory. .PP Specifying \f[C]\-\-delete\-after\f[] (the default value) will delay deletion of files until all new/updated files have been successfully -transfered. +transferred. The files to be deleted are collected in the copy pass then deleted -after the copy pass has completed sucessfully. +after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message \f[C]not\ deleting\ files\ as\ there\ were\ IO\ errors\f[]. +.SS \-\-fast\-list +.PP +When doing anything which involves a directory listing (eg +\f[C]sync\f[], \f[C]copy\f[], \f[C]ls\f[] \- in fact nearly every +command), rclone normally lists a directory and processes it before +using more directory lists to process any subdirectories. +This can be parallelised and works very quickly using the least amount +of memory. +.PP +However some remotes have a way of listing all files beneath a directory +in one (or a small number) of transactions. +These tend to be the bucket based remotes (eg s3, b2, gcs, swift, +hubic). +.PP +If you use the \f[C]\-\-fast\-list\f[] flag then rclone will use this +method for listing directories. 
+This will have the following consequences for the listing: +.IP \[bu] 2 +It \f[B]will\f[] use fewer transactions (important if you pay for them) +.IP \[bu] 2 +It \f[B]will\f[] use more memory. +Rclone has to load the whole listing into memory. +.IP \[bu] 2 +It \f[I]may\f[] be faster because it uses fewer transactions +.IP \[bu] 2 +It \f[I]may\f[] be slower because it can\[aq]t be parallelized +.PP +rclone should always give identical results with and without +\f[C]\-\-fast\-list\f[]. +.PP +If you pay for transactions and can fit your entire sync listing into +memory then \f[C]\-\-fast\-list\f[] is recommended. +If you have a very big sync to do then don\[aq]t use +\f[C]\-\-fast\-list\f[] otherwise you will run out of memory. +.PP +If you use \f[C]\-\-fast\-list\f[] on a remote which doesn\[aq]t support +it, then rclone will just ignore it. .SS \-\-timeout=TIME .PP This sets the IO idle timeout. @@ -1810,7 +1959,7 @@ be updated if the sizes are different. .PP On remotes which don\[aq]t support mod time directly the time checked will be the uploaded time. -This means that if uploading to one of these remoes, rclone will skip +This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file. .PP @@ -1919,7 +2068,7 @@ export\ RCLONE_CONFIG_PASS .PP Then source the file when you want to use it. From the shell you would do \f[C]source\ set\-rclone\-password\f[]. -It will then ask you for the password and set it in the envonment +It will then ask you for the password and set it in the environment variable. .PP If you are running rclone inside a script, you might want to disable @@ -1992,7 +2141,7 @@ If you are only copying a small number of files and/or have a large number of files on the destination then \f[C]\-\-no\-traverse\f[] will stop rclone listing the destination and save time. 
.PP -However if you are copying a large number of files, escpecially if you +However if you are copying a large number of files, especially if you are doing a copy where lots of the files haven\[aq]t changed and won\[aq]t need copying then you shouldn\[aq]t use \f[C]\-\-no\-traverse\f[]. @@ -2031,7 +2180,7 @@ For the filtering options .IP \[bu] 2 \f[C]\-\-dump\-filters\f[] .PP -See the filtering section (http://rclone.org/filtering/). +See the filtering section (https://rclone.org/filtering/). .SS Logging .PP rclone has 4 levels of logging, \f[C]Error\f[], \f[C]Notice\f[], @@ -2080,7 +2229,7 @@ When rclone is running it will accumulate errors as it goes along, and only exit with an non\-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message -(visibile with \f[C]\-q\f[]) showing the message and which file caused +(visible with \f[C]\-q\f[]) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the @@ -2844,15 +2993,15 @@ T} T{ Dropbox T}@T{ -\- +DBHASH † T}@T{ -No +Yes T}@T{ Yes T}@T{ No T}@T{ -R +\- T} T{ Google Cloud Storage @@ -2881,7 +3030,7 @@ T}@T{ R T} T{ -Microsoft One Drive +Microsoft OneDrive T}@T{ SHA1 T}@T{ @@ -2946,6 +3095,19 @@ T}@T{ \- T} T{ +FTP +T}@T{ +\- +T}@T{ +No +T}@T{ +Yes +T}@T{ +No +T}@T{ +\- +T} +T{ The local filesystem T}@T{ All @@ -2971,6 +3133,10 @@ the \f[C]check\f[] command. .PP To use the checksum checks between filesystems they must support a common hash type. +.PP +† Note that Dropbox supports its own custom +hash (https://www.dropbox.com/developers/reference/content-hash). +This is an SHA256 sum of all the 4MB block SHA256s. .SS ModTime .PP The cloud storage system supports setting modification times on objects. @@ -3040,7 +3206,7 @@ more efficient. .PP .TS tab(@); -l c c c c c. +l c c c c c c. 
T{ Name T}@T{ @@ -3053,6 +3219,8 @@ T}@T{ DirMove T}@T{ CleanUp +T}@T{ +ListR T} _ T{ @@ -3067,6 +3235,8 @@ T}@T{ Yes T}@T{ No #575 (https://github.com/ncw/rclone/issues/575) +T}@T{ +No T} T{ Amazon S3 @@ -3080,6 +3250,8 @@ T}@T{ No T}@T{ No +T}@T{ +Yes T} T{ Openstack Swift @@ -3093,6 +3265,8 @@ T}@T{ No T}@T{ No +T}@T{ +Yes T} T{ Dropbox @@ -3106,6 +3280,8 @@ T}@T{ Yes T}@T{ No #575 (https://github.com/ncw/rclone/issues/575) +T}@T{ +No T} T{ Google Cloud Storage @@ -3119,6 +3295,8 @@ T}@T{ No T}@T{ No +T}@T{ +Yes T} T{ Amazon Drive @@ -3132,9 +3310,11 @@ T}@T{ Yes T}@T{ No #575 (https://github.com/ncw/rclone/issues/575) +T}@T{ +No T} T{ -Microsoft One Drive +Microsoft OneDrive T}@T{ Yes T}@T{ @@ -3145,6 +3325,8 @@ T}@T{ No #197 (https://github.com/ncw/rclone/issues/197) T}@T{ No #575 (https://github.com/ncw/rclone/issues/575) +T}@T{ +No T} T{ Hubic @@ -3158,6 +3340,8 @@ T}@T{ No T}@T{ No +T}@T{ +Yes T} T{ Backblaze B2 @@ -3171,6 +3355,8 @@ T}@T{ No T}@T{ Yes +T}@T{ +Yes T} T{ Yandex Disk @@ -3184,6 +3370,8 @@ T}@T{ No T}@T{ No #575 (https://github.com/ncw/rclone/issues/575) +T}@T{ +Yes T} T{ SFTP @@ -3197,6 +3385,23 @@ T}@T{ Yes T}@T{ No +T}@T{ +No +T} +T{ +FTP +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No T} T{ The local filesystem @@ -3210,6 +3415,8 @@ T}@T{ Yes T}@T{ No +T}@T{ +No T} .TE .SS Purge @@ -3255,6 +3462,12 @@ This is used for emptying the trash for a remote by .PP If the server can\[aq]t do \f[C]CleanUp\f[] then \f[C]rclone\ cleanup\f[] will return an error. +.SS ListR +.PP +The remote supports a recursive list to list all the contents beneath a +directory quickly. +This enables the \f[C]\-\-fast\-list\f[] flag to work. +See the rclone docs (/docs/#fast-list) for more details. 
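The transaction saving that ListR and `--fast-list` bring can be pictured with a toy model (illustrative Python only, not rclone's implementation): the default strategy pays one listing transaction per directory, while a remote with ListR can return the whole tree in roughly one call.

```python
# A directory tree as nested dicts; leaves (None) are files.
tree = {
    "photos": {"2016": {"a.jpg": None, "b.jpg": None}, "2017": {"c.jpg": None}},
    "docs": {"cv.pdf": None},
}

def count_dirs(node):
    """Directories in the tree, counting the root itself."""
    return 1 + sum(count_dirs(v) for v in node.values() if isinstance(v, dict))

per_directory_transactions = count_dirs(tree)  # root, photos, 2016, 2017, docs
fast_list_transactions = 1                     # one recursive ListR call
print(per_directory_transactions, fast_list_transactions)  # 5 1
```

The per-directory strategy can run those 5 listings in parallel with little memory; the single ListR call must hold the whole result in memory, which is exactly the trade-off described above.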
.SS Google Drive .PP Paths are specified as \f[C]drive:path\f[] @@ -3279,10 +3492,13 @@ This will guide you through an interactive setup process: .IP .nf \f[C] +No\ remotes\ found\ \-\ make\ a\ new\ one n)\ New\ remote -d)\ Delete\ remote +r)\ Rename\ remote +c)\ Copy\ remote +s)\ Set\ configuration\ password q)\ Quit\ config -e/n/d/q>\ n +n/r/c/s/q>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value @@ -3296,27 +3512,29 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ \ \ \\\ "dropbox" \ 5\ /\ Encrypt/Decrypt\ a\ remote \ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) +\ 6\ /\ FTP\ Connection +\ \ \ \\\ "ftp" +\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) \ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive +\ 8\ /\ Google\ Drive \ \ \ \\\ "drive" -\ 8\ /\ Hubic +\ 9\ /\ Hubic \ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk +10\ /\ Local\ Disk \ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive +11\ /\ Microsoft\ OneDrive \ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) +12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) \ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection +13\ /\ SSH/SFTP\ Connection \ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk +14\ /\ Yandex\ Disk \ \ \ \\\ "yandex" -Storage>\ 7 +Storage>\ 8 Google\ Application\ Client\ Id\ \-\ leave\ blank\ normally. -client_id> +client_id>\ Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally. -client_secret> +client_secret>\ Remote\ config Use\ auto\ config? \ *\ Say\ Y\ if\ not\ sure @@ -3328,10 +3546,14 @@ If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ lin Log\ in\ and\ authorize\ rclone\ for\ access Waiting\ for\ code... Got\ code +Configure\ this\ as\ a\ team\ drive? 
+y)\ Yes +n)\ No +y/n>\ n \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- [remote] -client_id\ = -client_secret\ = +client_id\ =\ +client_secret\ =\ token\ =\ {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014\-03\-16T13:57:58.955387075Z","Extra":null} \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- y)\ Yes\ this\ is\ OK @@ -3374,6 +3596,46 @@ To copy a local directory to a drive directory called backup rclone\ copy\ /home/source\ remote:backup \f[] .fi +.SS Team drives +.PP +If you want to configure the remote to point to a Google Team Drive then +answer \f[C]y\f[] to the question +\f[C]Configure\ this\ as\ a\ team\ drive?\f[]. +.PP +This will fetch the list of Team Drives from google and allow you to +configure which one you want to use. +You can also type in a team drive ID if you prefer. +.PP +For example: +.IP +.nf +\f[C] +Configure\ this\ as\ a\ team\ drive? +y)\ Yes +n)\ No +y/n>\ y +Fetching\ team\ drive\ list... +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Rclone\ Test +\ \ \ \\\ "xxxxxxxxxxxxxxxxxxxx" +\ 2\ /\ Rclone\ Test\ 2 +\ \ \ \\\ "yyyyyyyyyyyyyyyyyyyy" +\ 3\ /\ Rclone\ Test\ 3 +\ \ \ \\\ "zzzzzzzzzzzzzzzzzzzz" +Enter\ a\ Team\ Drive\ ID>\ 1 +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +client_id\ =\ +client_secret\ =\ +token\ =\ {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014\-03\-16T13:57:58.955387075Z","Extra":null} +team_drive\ =\ xxxxxxxxxxxxxxxxxxxx +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi .SS Modified time .PP Google drive stores modification times accurate to 1 ms. 
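As an illustration of that 1 ms granularity, any timestamp with finer precision has to be truncated before it can round-trip through Drive (illustrative Python; the helper name is invented, not rclone code):

```python
from datetime import datetime, timezone

def truncate_to_ms(dt):
    """Drop sub-millisecond precision, the granularity Drive stores."""
    return dt.replace(microsecond=dt.microsecond - dt.microsecond % 1000)

local = datetime(2017, 6, 15, 20, 12, 26, 123456, tzinfo=timezone.utc)
stored = truncate_to_ms(local)
print(stored.microsecond)  # 123000 - the trailing 456 microseconds are lost
```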
@@ -3788,7 +4050,7 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "sa\-east\-1"
location_constraint>\ 1
Canned\ ACL\ used\ when\ creating\ buckets\ and/or\ storing\ objects\ in\ S3.
-For\ more\ info\ visit\ http://docs.aws.amazon.com/AmazonS3/latest/dev/acl\-overview.html#canned\-acl
+For\ more\ info\ visit\ https://docs.aws.amazon.com/AmazonS3/latest/dev/acl\-overview.html#canned\-acl
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Owner\ gets\ FULL_CONTROL.\ No\ one\ else\ has\ access\ rights\ (default).
\ \ \ \\\ "private"
@@ -3878,6 +4140,11 @@ excess files in the bucket.
rclone\ sync\ /home/local/directory\ remote:bucket
\f[]
.fi
+.SS \-\-fast\-list
+.PP
+This remote supports \f[C]\-\-fast\-list\f[] which allows you to use
+fewer transactions in exchange for more memory.
+See the rclone docs (/docs/#fast-list) for more details.
.SS Modified time
.PP
The modified time is stored as metadata on the object as
@@ -3926,6 +4193,63 @@ Running \f[C]rclone\f[] on an EC2 instance with an IAM role
If none of these options actually end up providing \f[C]rclone\f[] with
AWS credentials then S3 interaction will be non\-authenticated (see
below).
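The credential lookup order described above behaves like a simple fallback chain. A sketch of that order (illustrative only; the function and return labels are invented, and this is not rclone's actual code):

```python
def resolve_s3_credentials(env, shared_file_creds, iam_role_creds):
    """Illustrative fallback chain: environment variables first, then the
    shared credentials file, then an EC2 IAM role, else anonymous access."""
    if env.get("AWS_ACCESS_KEY_ID") and env.get("AWS_SECRET_ACCESS_KEY"):
        return "environment"
    if shared_file_creds:
        return "shared-credentials-file"
    if iam_role_creds:
        return "iam-role"
    return "anonymous"

# With nothing configured, S3 interaction is non-authenticated.
print(resolve_s3_credentials({}, None, None))  # anonymous
```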
+.SS S3 Permissions
+.PP
+When using the \f[C]sync\f[] subcommand of \f[C]rclone\f[] the following
+minimum permissions are required to be available on the bucket being
+written to:
+.IP \[bu] 2
+\f[C]ListBucket\f[]
+.IP \[bu] 2
+\f[C]DeleteObject\f[]
+.IP \[bu] 2
+\f[C]GetObject\f[]
+.IP \[bu] 2
+\f[C]PutObject\f[]
+.IP \[bu] 2
+\f[C]PutObjectACL\f[]
+.PP
+Example policy:
+.IP
+.nf
+\f[C]
+{
+\ \ \ \ "Version":\ "2012\-10\-17",
+\ \ \ \ "Statement":\ [
+\ \ \ \ \ \ \ \ {
+\ \ \ \ \ \ \ \ \ \ \ \ "Effect":\ "Allow",
+\ \ \ \ \ \ \ \ \ \ \ \ "Principal":\ {
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "AWS":\ "arn:aws:iam::USER_SID:user/USER_NAME"
+\ \ \ \ \ \ \ \ \ \ \ \ },
+\ \ \ \ \ \ \ \ \ \ \ \ "Action":\ [
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "s3:ListBucket",
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "s3:DeleteObject",
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "s3:GetObject",
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "s3:PutObject",
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "s3:PutObjectAcl"
+\ \ \ \ \ \ \ \ \ \ \ \ ],
+\ \ \ \ \ \ \ \ \ \ \ \ "Resource":\ [
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ "arn:aws:s3:::BUCKET_NAME/*",
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ "arn:aws:s3:::BUCKET_NAME"
+\ \ \ \ \ \ \ \ \ \ \ \ ]
+\ \ \ \ \ \ \ \ }
+\ \ \ \ ]
+}
+\f[]
+.fi
+.PP
+Notes on above:
+.IP "1." 3
+This is a policy that can be used when creating a bucket.
+It assumes that \f[C]USER_NAME\f[] has been created.
+.IP "2." 3
+The Resource entry must include both resource ARNs, as one implies the
+bucket and the other implies the bucket\[aq]s objects.
+.PP
+For reference, here\[aq]s an Ansible
+script (https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
+that will generate one or more buckets that will work with
+\f[C]rclone\ sync\f[].
.SS Specific options
.PP
Here are the command line options specific to this cloud storage system.
@@ -3934,7 +4258,7 @@ Here are the command line options specific to this cloud storage system.
Canned ACL used when creating buckets and/or storing objects in S3.
.PP For more info visit the canned ACL -docs (http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl). +docs (https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl). .SS \-\-s3\-storage\-class=STRING .PP Storage class to upload new objects with. @@ -4114,12 +4438,12 @@ rclone\ \-\-size\-only\ copy\ /path/to/files\ minio:bucket .SS Swift .PP Swift refers to Openstack Object -Storage (http://www.openstack.org/software/openstack-storage/). +Storage (https://www.openstack.org/software/openstack-storage/). Commercial implementations of that being: .IP \[bu] 2 -Rackspace Cloud Files (http://www.rackspace.com/cloud/files/) +Rackspace Cloud Files (https://www.rackspace.com/cloud/files/) .IP \[bu] 2 -Memset Memstore (http://www.memset.com/cloud/storage/) +Memset Memstore (https://www.memset.com/cloud/storage/) .PP Paths are specified as \f[C]remote:container\f[] (or \f[C]remote:\f[] for the \f[C]lsd\f[] command.) You may put subdirectories in too, eg @@ -4293,6 +4617,11 @@ tenant\ =\ $OS_TENANT_NAME .PP Note that you may (or may not) need to set \f[C]region\f[] too \- try without first. +.SS \-\-fast\-list +.PP +This remote supports \f[C]\-\-fast\-list\f[] which allows you to use +fewer transactions in exchange for more memory. +See the rclone docs (/docs/#fast-list) for more details. .SS Specific options .PP Here are the command line options specific to this cloud storage system. @@ -4434,22 +4763,20 @@ To copy a local directory to a dropbox directory called backup rclone\ copy\ /home/source\ remote:backup \f[] .fi -.SS Modified time and MD5SUMs +.SS Modified time and Hashes .PP -Dropbox doesn\[aq]t provide the ability to set modification times in the -V1 public API, so rclone can\[aq]t support modified time with Dropbox. +Dropbox supports modified times, but the only way to set a modification +time is to re\-upload the file. 
.PP -This may change in the future \- see these issues for details: -.IP \[bu] 2 -Dropbox V2 API (https://github.com/ncw/rclone/issues/349) -.IP \[bu] 2 -Allow syncs for remotes that can\[aq]t set modtime on existing -objects (https://github.com/ncw/rclone/issues/348) +This means that if you uploaded your data with an older version of +rclone which didn\[aq]t support the v2 API and modified times, rclone +will decide to upload all your old data to fix the modification times. +If you don\[aq]t want this to happen use \f[C]\-\-size\-only\f[] or +\f[C]\-\-checksum\f[] flag to stop it. .PP -Dropbox doesn\[aq]t return any sort of checksum (MD5 or SHA1). -.PP -Together that means that syncs to dropbox will effectively have the -\f[C]\-\-size\-only\f[] flag set. +Dropbox supports its own hash +type (https://www.dropbox.com/developers/reference/content-hash) which +is checked for all transfers. .SS Specific options .PP Here are the command line options specific to this cloud storage system. @@ -4475,7 +4802,7 @@ upload one of those file names, but the sync won\[aq]t fail. If you have more than 10,000 files in a directory then \f[C]rclone\ purge\ dropbox:dir\f[] will return the error \f[C]Failed\ to\ purge:\ There\ are\ too\ many\ files\ involved\ in\ this\ operation\f[]. -As a work\-around do an \f[C]rclone\ delete\ dropbix:dir\f[] followed by +As a work\-around do an \f[C]rclone\ delete\ dropbox:dir\f[] followed by an \f[C]rclone\ rmdir\ dropbox:dir\f[]. .SS Google Cloud Storage .PP @@ -4665,6 +4992,11 @@ These credentials are what rclone will use for authentication. To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the \f[C]service_account_file\f[] prompt and rclone won\[aq]t use the browser based authentication flow. +.SS \-\-fast\-list +.PP +This remote supports \f[C]\-\-fast\-list\f[] which allows you to use +fewer transactions in exchange for more memory. 
+See the rclone docs (/docs/#fast-list) for more details.
.SS Modified time
.PP
Google cloud storage stores md5sums natively and rclone stores
@@ -4681,6 +5013,27 @@ The initial setup for Amazon Drive involves getting a token from Amazon
which you need to do in your browser.
\f[C]rclone\ config\f[] walks you through it.
.PP
+The configuration process for Amazon Drive may involve using an oauth
+proxy (https://github.com/ncw/oauthproxy).
+This is used to keep the Amazon credentials out of the source code.
+The proxy runs in Google\[aq]s very secure App Engine environment and
+doesn\[aq]t store any credentials which pass through it.
+.PP
+\f[B]NB\f[] rclone doesn\[aq]t currently have its own Amazon Drive
+credentials (see the
+forum (https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/)
+for why) so you will either need to have your own \f[C]client_id\f[] and
+\f[C]client_secret\f[] with Amazon Drive, or use a third party oauth
+proxy in which case you will need to enter \f[C]client_id\f[],
+\f[C]client_secret\f[], \f[C]auth_url\f[] and \f[C]token_url\f[].
+.PP
+Note also if you are not using Amazon\[aq]s \f[C]auth_url\f[] and
+\f[C]token_url\f[] (ie you filled in something for those), then if
+setting up on a remote machine you can only use the "copy the config
+file" method of
+configuration (https://rclone.org/remote_setup/#configuring-by-copying-the-config-file)
+\- \f[C]rclone\ authorize\f[] will not work.
+.PP
Here is an example of how to make a remote called \f[C]remote\f[].
First run:
.IP
@@ -4694,10 +5047,13 @@ This will guide you through an interactive setup process:
.IP
.nf
\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
n)\ New\ remote
-d)\ Delete\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
q)\ Quit\ config
-e/n/d/q>\ n
+n/r/c/s/q>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value @@ -4711,28 +5067,35 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ \ \ \\\ "dropbox" \ 5\ /\ Encrypt/Decrypt\ a\ remote \ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) +\ 6\ /\ FTP\ Connection +\ \ \ \\\ "ftp" +\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) \ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive +\ 8\ /\ Google\ Drive \ \ \ \\\ "drive" -\ 8\ /\ Hubic +\ 9\ /\ Hubic \ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk +10\ /\ Local\ Disk \ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive +11\ /\ Microsoft\ OneDrive \ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) +12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) \ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection +13\ /\ SSH/SFTP\ Connection \ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk +14\ /\ Yandex\ Disk \ \ \ \\\ "yandex" Storage>\ 1 -Amazon\ Application\ Client\ Id\ \-\ leave\ blank\ normally. -client_id> -Amazon\ Application\ Client\ Secret\ \-\ leave\ blank\ normally. -client_secret> +Amazon\ Application\ Client\ Id\ \-\ required. +client_id>\ your\ client\ ID\ goes\ here +Amazon\ Application\ Client\ Secret\ \-\ required. +client_secret>\ your\ client\ secret\ goes\ here +Auth\ server\ URL\ \-\ leave\ blank\ to\ use\ Amazon\[aq]s. +auth_url>\ Optional\ auth\ URL +Token\ server\ url\ \-\ leave\ blank\ to\ use\ Amazon\[aq]s. +token_url>\ Optional\ token\ URL Remote\ config +Make\ sure\ your\ Redirect\ URL\ is\ set\ to\ "http://127.0.0.1:53682/"\ in\ your\ custom\ config. Use\ auto\ config? \ *\ Say\ Y\ if\ not\ sure \ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine @@ -4745,8 +5108,10 @@ Waiting\ for\ code... 
Got\ code \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- [remote] -client_id\ = -client_secret\ = +client_id\ =\ your\ client\ ID\ goes\ here +client_secret\ =\ your\ client\ secret\ goes\ here +auth_url\ =\ Optional\ auth\ URL +token_url\ =\ Optional\ token\ URL token\ =\ {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015\-09\-06T16:07:39.658438471+01:00"} \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- y)\ Yes\ this\ is\ OK @@ -4756,7 +5121,7 @@ y/e/d>\ y \f[] .fi .PP -See the remote setup docs (http://rclone.org/remote_setup/) for how to +See the remote setup docs (https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available. .PP Note that rclone runs a webserver on your local machine to collect the @@ -4865,7 +5230,7 @@ will fail. At the time of writing (Jan 2016) is in the area of 50GB per file. This means that larger files are likely to fail. .PP -Unfortunatly there is no way for rclone to see that this failure is +Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use \f[C]\-\-max\-size\ 50000M\f[] option to @@ -4958,7 +5323,7 @@ y/e/d>\ y \f[] .fi .PP -See the remote setup docs (http://rclone.org/remote_setup/) for how to +See the remote setup docs (https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available. .PP Note that rclone runs a webserver on your local machine to collect the @@ -5124,7 +5489,7 @@ y/e/d>\ y \f[] .fi .PP -See the remote setup docs (http://rclone.org/remote_setup/) for how to +See the remote setup docs (https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available. 
.PP Note that rclone runs a webserver on your local machine to collect the @@ -5169,6 +5534,11 @@ directory rclone\ copy\ /home/source\ remote:default/backup \f[] .fi +.SS \-\-fast\-list +.PP +This remote supports \f[C]\-\-fast\-list\f[] which allows you to use +fewer transactions in exchange for more memory. +See the rclone docs (/docs/#fast-list) for more details. .SS Modified time .PP The modified time is stored as metadata on the object as @@ -5300,6 +5670,11 @@ excess files in the bucket. rclone\ sync\ /home/local/directory\ remote:bucket \f[] .fi +.SS \-\-fast\-list +.PP +This remote supports \f[C]\-\-fast\-list\f[] which allows you to use +fewer transactions in exchange for more memory. +See the rclone docs (/docs/#fast-list) for more details. .SS Modified time .PP The modified time is stored as metadata on the object as @@ -5524,7 +5899,7 @@ are permitted, so you can\[aq]t upload files or delete them. .SS Yandex Disk .PP Yandex Disk (https://disk.yandex.com) is a cloud storage solution -created by Yandex (http://yandex.com). +created by Yandex (https://yandex.com). .PP Yandex paths may be as deep as required, eg \f[C]remote:directory/subdirectory\f[]. @@ -5604,7 +5979,7 @@ y/e/d>\ y \f[] .fi .PP -See the remote setup docs (http://rclone.org/remote_setup/) for how to +See the remote setup docs (https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available. .PP Note that rclone runs a webserver on your local machine to collect the @@ -5648,6 +6023,11 @@ excess files in the path. rclone\ sync\ /home/local/directory\ remote:directory \f[] .fi +.SS \-\-fast\-list +.PP +This remote supports \f[C]\-\-fast\-list\f[] which allows you to use +fewer transactions in exchange for more memory. +See the rclone docs (/docs/#fast-list) for more details. 
.SS Modified time
.PP
Modified times are supported and are stored accurate to 1 ns in custom
@@ -5703,23 +6083,25 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "dropbox"
\ 5\ /\ Encrypt/Decrypt\ a\ remote
\ \ \ \\\ "crypt"
-\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ 6\ /\ FTP\ Connection
+\ \ \ \\\ "ftp"
+\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
-\ 7\ /\ Google\ Drive
+\ 8\ /\ Google\ Drive
\ \ \ \\\ "drive"
-\ 8\ /\ Hubic
+\ 9\ /\ Hubic
\ \ \ \\\ "hubic"
-\ 9\ /\ Local\ Disk
+10\ /\ Local\ Disk
\ \ \ \\\ "local"
-10\ /\ Microsoft\ OneDrive
+11\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-12\ /\ SSH/SFTP\ Connection
+13\ /\ SSH/SFTP\ Connection
\ \ \ \\\ "sftp"
-13\ /\ Yandex\ Disk
+14\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
-Storage>\ 12\ \ 
+Storage>\ sftp
SSH\ host\ to\ connect\ to
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Connect\ to\ example.com
@@ -5727,7 +6109,7 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
host>\ example.com
SSH\ username,\ leave\ blank\ for\ current\ username,\ ncw
user>\ 
-SSH\ port
+SSH\ port,\ leave\ blank\ to\ use\ default\ (22)
port>\ 
SSH\ password,\ leave\ blank\ to\ use\ ssh\-agent
y)\ Yes\ type\ in\ my\ own\ password
@@ -5792,6 +6174,8 @@ Modified times are used in syncing and are fully supported.
.PP
SFTP does not support any checksums.
.PP
+The only ssh agent supported under Windows is Putty\[aq]s pageant.
+.PP
SFTP isn\[aq]t supported under plan9 until this
issue (https://github.com/pkg/sftp/issues/156) is fixed.
.PP
@@ -5873,6 +6257,8 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "off"
\ 2\ /\ Encrypt\ the\ filenames\ see\ the\ docs\ for\ the\ details.
\ \ \ \\\ "standard"
+\ 3\ /\ Very\ simple\ filename\ obfuscation.
+\ \ \ \\\ "obfuscate"
filename_encryption>\ 2
Password\ or\ pass\ phrase\ for\ encryption.
y)\ Yes\ type\ in\ my\ own\ password
@@ -6046,6 +6432,33 @@ identical files names will have identical uploaded names
.IP \[bu] 2
can use shortcuts to shorten the directory recursion
.PP
+Obfuscation
+.PP
+This is a simple "rotate" of the filename, with each file having a rot
+distance based on the filename.
+We store the distance at the beginning of the filename.
+So a file called "hello" may become "53.jgnnq".
+.PP
+This is not a strong encryption of filenames, but it may stop automated
+scanning tools from picking up on filename patterns.
+As such it\[aq]s an intermediate between "off" and "standard".
+The advantage is that it allows for longer path segment names.
+.PP
+There is a possibility with some unicode based filenames that the
+obfuscation is weak and may map lower case characters to upper case
+equivalents.
+You cannot rely on this for strong protection.
+.IP \[bu] 2
+file names very lightly obfuscated
+.IP \[bu] 2
+file names can be longer than standard encryption
+.IP \[bu] 2
+can use sub paths and copy single files
+.IP \[bu] 2
+directory structure visible
+.IP \[bu] 2
+identical file names will have identical uploaded names
+.PP
Cloud storage systems have various limits on file name length and total
path length which you are more likely to hit using "Standard" file name
encryption.
@@ -6217,6 +6630,152 @@ If the user doesn\[aq]t supply a salt then rclone uses an internal one.
\f[C]scrypt\f[] makes it impractical to mount a dictionary attack on
rclone encrypted data.
For full protection against this you should always use a salt.
+.SS FTP
+.PP
+FTP is the File Transfer Protocol.
+FTP support is provided using the
+github.com/jlaffaye/ftp (https://godoc.org/github.com/jlaffaye/ftp)
+package.
+.PP
+Here is an example of making an FTP configuration.
+First run
+.IP
+.nf
+\f[C]
+rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process.
+An FTP remote only needs a host together with a username and a
+password.
+With an anonymous FTP server, you will need to use \f[C]anonymous\f[] as
+the username and your email address as the password.
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/r/c/s/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 3\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 4\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ FTP\ Connection\ 
+\ \ \ \\\ "ftp"
+\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+\ 8\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+\ 9\ /\ Hubic
+\ \ \ \\\ "hubic"
+10\ /\ Local\ Disk
+\ \ \ \\\ "local"
+11\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+13\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+14\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+Storage>\ ftp
+FTP\ host\ to\ connect\ to
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Connect\ to\ ftp.example.com
+\ \ \ \\\ "ftp.example.com"
+host>\ ftp.example.com
+FTP\ username,\ leave\ blank\ for\ current\ username,\ ncw
+user>
+FTP\ port,\ leave\ blank\ to\ use\ default\ (21)
+port>
+FTP\ password
+y)\ Yes\ type\ in\ my\ own\ password
+g)\ Generate\ random\ password
+y/g>\ y
+Enter\ the\ password:
+password:
+Confirm\ the\ password:
+password:
+Remote\ config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+host\ =\ ftp.example.com
+user\ =\ 
+port\ =
+pass\ =\ ***\ ENCRYPTED\ ***
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +This remote is called \f[C]remote\f[] and can now be used like this +.PP +See all directories in the home directory +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +Make a new directory +.IP +.nf +\f[C] +rclone\ mkdir\ remote:path/to/directory +\f[] +.fi +.PP +List the contents of a directory +.IP +.nf +\f[C] +rclone\ ls\ remote:path/to/directory +\f[] +.fi +.PP +Sync \f[C]/home/local/directory\f[] to the remote directory, deleting +any excess files in the directory. +.IP +.nf +\f[C] +rclone\ sync\ /home/local/directory\ remote:directory +\f[] +.fi +.SS Modified time +.PP +FTP does not support modified times. +Any times you see on the server will be time of upload. +.SS Checksums +.PP +FTP does not support any checksums. +.SS Limitations +.PP +Note that since FTP isn\[aq]t HTTP based the following flags don\[aq]t +work with it: \f[C]\-\-dump\-headers\f[], \f[C]\-\-dump\-bodies\f[], +\f[C]\-\-dump\-auth\f[] +.PP +Note that \f[C]\-\-timeout\f[] isn\[aq]t supported (but +\f[C]\-\-contimeout\f[] is). +.PP +FTP could support server side move but doesn\[aq]t yet. .SS Local Filesystem .PP Local paths are specified as normal filesystem paths, eg @@ -6350,6 +6909,20 @@ $\ rclone\ \-L\ ls\ /tmp/a \ \ \ \ \ \ \ \ 6\ b/one \f[] .fi +.SS \-\-no\-local\-unicode\-normalization +.PP +By default rclone normalizes (NFC) the unicode representation of +filenames and directories. +This flag disables that normalization and uses the same representation +as the local filesystem. +.PP +This can be useful if you need to retain the local unicode +representation and you are using a cloud provider which supports +unnormalized names (e.g. +S3 or ACD). +.PP +This should also work with any provider if you are using crypt and have +file name encryption (the default) or obfuscation turned on. 
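To see what the NFC normalization described above changes, compare a decomposed (NFD) filename, the form some local filesystems produce, with its NFC equivalent (illustrative Python, not rclone code):

```python
import unicodedata

# "é" can be one precomposed code point (NFC) or "e" plus a combining
# acute accent (NFD). The names look identical on screen but the bytes
# differ, which is why rclone normalizes filenames to NFC by default.
nfd_name = "re\u0301sume\u0301.txt"        # decomposed: 12 code points
nfc_name = unicodedata.normalize("NFC", nfd_name)

print(nfd_name == nfc_name)                # False - different code points
print(nfc_name == "r\u00e9sum\u00e9.txt")  # True - matches the precomposed name
print(len(nfd_name), len(nfc_name))        # 12 10
```

With `--no-local-unicode-normalization` the decomposed form would be sent to the remote unchanged, which matters for providers that can store either representation.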
.SS \-\-one\-file\-system, \-x .PP This tells rclone to stay in the filesystem specified by the root and @@ -7300,7 +7873,7 @@ Swift .IP \[bu] 2 Fix sync for chunked files .IP \[bu] 2 -One Drive +OneDrive .IP \[bu] 2 Re\-enable server side copy .IP \[bu] 2 @@ -7338,7 +7911,7 @@ This could have caused data loss for files > 5GB in size .IP \[bu] 2 Use ContentType from Object to avoid lookups in listings .IP \[bu] 2 -One Drive +OneDrive .IP \[bu] 2 disable server side copy as it seems to be broken at Microsoft .RE @@ -7348,7 +7921,7 @@ v1.24 \- 2015\-11\-07 .IP \[bu] 2 New features .IP \[bu] 2 -Add support for Microsoft One Drive +Add support for Microsoft OneDrive .IP \[bu] 2 Add \f[C]\-\-no\-check\-certificate\f[] option to disable server certificate verification @@ -7861,13 +8434,13 @@ If you want to find this file, the simplest way is to run \f[C]rclone\ \-h\f[] and look at the help for the \f[C]\-\-config\f[] flag which will tell you where it is. .PP -See the remote setup docs (http://rclone.org/remote_setup/) for more +See the remote setup docs (https://rclone.org/remote_setup/) for more info. .SS How do I configure rclone on a remote / headless box with no browser? .PP This has now been documented in its own remote setup -page (http://rclone.org/remote_setup/). +page (https://rclone.org/remote_setup/). .SS Can rclone sync directly from drive to s3 .PP Rclone can sync between two remote cloud storage systems just fine. @@ -8018,7 +8591,7 @@ COPYING file included with the source code). 
.IP .nf \f[C] -Copyright\ (C)\ 2012\ by\ Nick\ Craig\-Wood\ http://www.craig\-wood.com/nick/ +Copyright\ (C)\ 2012\ by\ Nick\ Craig\-Wood\ https://www.craig\-wood.com/nick/ Permission\ is\ hereby\ granted,\ free\ of\ charge,\ to\ any\ person\ obtaining\ a\ copy of\ this\ software\ and\ associated\ documentation\ files\ (the\ "Software"),\ to\ deal @@ -8046,7 +8619,7 @@ Nick Craig\-Wood .IP \[bu] 2 Alex Couper .IP \[bu] 2 -Leonid Shalupov +Leonid Shalupov .IP \[bu] 2 Shimon Doodkin .IP \[bu] 2 @@ -8145,6 +8718,51 @@ Jack Schmidt Dedsec1 .IP \[bu] 2 Hisham Zarka +.IP \[bu] 2 +Jérôme Vizcaino +.IP \[bu] 2 +Mike Tesch +.IP \[bu] 2 +Marvin Watson +.IP \[bu] 2 +Danny Tsai +.IP \[bu] 2 +Yoni Jah +.IP \[bu] 2 +Stephen Harris +.IP \[bu] 2 +Ihor Dvoretskyi +.IP \[bu] 2 +Jon Craton +.IP \[bu] 2 +Hraban Luyat +.IP \[bu] 2 +Michael Ledin +.IP \[bu] 2 +Martin Kristensen +.IP \[bu] 2 +Too Much IO +.IP \[bu] 2 +Anisse Astier +.IP \[bu] 2 +Zahiar Ahmed +.IP \[bu] 2 +Igor Kharin +.IP \[bu] 2 +Bill Zissimopoulos +.IP \[bu] 2 +Bob Potter +.IP \[bu] 2 +Steven Lu +.IP \[bu] 2 +Sjur Fredriksen +.IP \[bu] 2 +Ruwbin +.IP \[bu] 2 +Fabian Möller +.IP \[bu] 2 +Edward Q. +Bridges .SH Contact the rclone project .SS Forum .PP