diff --git a/MANUAL.html b/MANUAL.html index fd1b9026e..f5e77b908 100644 --- a/MANUAL.html +++ b/MANUAL.html @@ -12,25 +12,36 @@

Rclone

Logo

-

Rclone is a command line program to sync files and directories to and from

+

Rclone is a command line program to sync files and directories to and from:

Features

@@ -84,7 +95,9 @@ sudo mandb

Unzip the download and cd to the extracted folder.

unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64

Move rclone to your $PATH. You will be prompted for your password.

-
sudo mv rclone /usr/local/bin/
+
sudo mkdir -p /usr/local/bin
+sudo mv rclone /usr/local/bin/
+

(the mkdir command is safe to run, even if the directory already exists).

Remove the leftover files.

cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip

Run rclone config to set up. See rclone config docs for more details.

@@ -104,58 +117,30 @@ sudo mandb
    - hosts: rclone-hosts
       roles:
           - rclone
-

Installation with snap

-

Quickstart

- -

See below for how to install snapd if it isn't already installed

-

Arch

-
sudo pacman -S snapd
-

enable the snapd systemd service:

-
sudo systemctl enable --now snapd.socket
-

Debian / Ubuntu

-
sudo apt install snapd
-

Fedora

-
sudo dnf copr enable zyga/snapcore
-sudo dnf install snapd
-

enable the snapd systemd service:

-
sudo systemctl enable --now snapd.service
-

SELinux support is in beta, so currently:

-
sudo setenforce 0
-

to persist, edit /etc/selinux/config to set SELINUX=permissive and reboot.

-

Gentoo

-

Install the gentoo-snappy overlay.

-

OpenEmbedded/Yocto

-

Install the snap meta layer.

-

openSUSE

-
sudo zypper addrepo https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/ snappy
-sudo zypper install snapd
-

OpenWrt

-

Enable the snap-openwrt feed.

Configure

First, you'll need to configure rclone. As the object storage systems have quite complicated authentication, these are kept in a config file. (See the --config entry for how to find the config file and choose its location.)

The easiest way to make the config is to run rclone with the config option:

rclone config
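For example, if you keep the config file somewhere non-standard (the path below is illustrative), you can point rclone at it with the --config flag:

rclone --config /etc/rclone/rclone.conf config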

See the following for detailed instructions for

Usage

Rclone syncs a directory tree from one storage system to another.

@@ -171,8 +156,16 @@ rclone sync /local/path remote:path # syncs /local/path to the remoterclone config

Enter an interactive configuration session.

Synopsis

-

Enter an interactive configuration session.

-
rclone config
+

rclone config enters an interactive configuration session where you can set up new remotes and manage existing ones. You may also set or remove a password to protect your configuration.

+

Additional functions:

+ +
rclone config [function] [flags]
+

Options

+
  -h, --help   help for config

rclone copy

Copy files from source to dest, skipping already copied

Synopsis

@@ -192,7 +185,9 @@ destpath/two.txt destpath/sourcepath/two.txt

If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination.

See the --no-traverse option for controlling whether rclone lists the destination directory or not.
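For example (paths illustrative), these two commands are equivalent - both copy the contents of sourcepath rather than the directory itself:

rclone copy /home/user/sourcepath remote:destpath
rclone copy /home/user/sourcepath/ remote:destpath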

-
rclone copy source:path dest:path
+
rclone copy source:path dest:path [flags]
+

Options

+
  -h, --help   help for copy

rclone sync

Make source and dest identical, modifying destination only.

Synopsis

@@ -201,7 +196,9 @@ destpath/sourcepath/two.txt

Note that files in the destination won't be deleted if there were any errors at any point.

It is always the contents of the directory that is synced, not the directory, so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See the extended explanation in the copy command above if unsure.

If dest:path doesn't exist, it is created and the source:path contents go there.
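Since sync deletes files from the destination, a cautious pattern (paths illustrative) is to preview the run first:

rclone --dry-run sync /home/local/pictures remote:pictures
rclone sync /home/local/pictures remote:pictures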

-
rclone sync source:path dest:path
+
rclone sync source:path dest:path [flags]
+

Options

+
  -h, --help   help for sync

rclone move

Move files from source to dest.

Synopsis

@@ -209,7 +206,9 @@ destpath/sourcepath/two.txt

If no filters are in use and if possible this will server side move source:path into dest:path. After this, source:path will no longer exist.

Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path. If possible a server side move will be used, otherwise it will copy it (server side if possible) into dest:path then delete the original (if no errors on copy) in source:path.

Important: Since this can cause data loss, test first with the --dry-run flag.
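For example, preview the move before committing to it:

rclone --dry-run move source:path dest:path
rclone move source:path dest:path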

-
rclone move source:path dest:path
+
rclone move source:path dest:path [flags]
+

Options

+
  -h, --help   help for move

rclone delete

Remove the contents of path.

Synopsis

@@ -221,22 +220,30 @@ rclone --dry-run --min-size 100M delete remote:path

Then delete

rclone --min-size 100M delete remote:path

That reads "delete everything with a minimum size of 100 MB", hence it deletes all files bigger than 100 MBytes.

-
rclone delete remote:path
+
rclone delete remote:path [flags]
+

Options

+
  -h, --help   help for delete

rclone purge

Remove the path and all of its contents.

Synopsis

Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use delete if you want to selectively delete files.

-
rclone purge remote:path
+
rclone purge remote:path [flags]
+

Options

+
  -h, --help   help for purge

rclone mkdir

Make the path if it doesn't already exist.

Synopsis

Make the path if it doesn't already exist.

-
rclone mkdir remote:path
+
rclone mkdir remote:path [flags]
+

Options

+
  -h, --help   help for mkdir

rclone rmdir

Remove the path if empty.

Synopsis

Remove the path. Note that you can't remove a path with objects in it, use purge for that.

-
rclone rmdir remote:path
+
rclone rmdir remote:path [flags]
+

Options

+
  -h, --help   help for rmdir

rclone check

Checks the files in the source and destination match.

Synopsis

@@ -244,52 +251,70 @@ rclone --dry-run --min-size 100M delete remote:path

If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.

If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
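For example, a quick size-only comparison followed by a full download comparison:

rclone check --size-only source:path dest:path
rclone check --download source:path dest:path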

rclone check source:path dest:path [flags]
-

Options

-
      --download   Check by downloading rather than with hash.
+

Options

+
      --download   Check by downloading rather than with hash.
+  -h, --help       help for check

rclone ls

List all the objects in the path with size and path.

Synopsis

List all the objects in the path with size and path.

-
rclone ls remote:path
+
rclone ls remote:path [flags]
+

Options

+
  -h, --help   help for ls

rclone lsd

List all directories/containers/buckets in the path.

Synopsis

List all directories/containers/buckets in the path.

-
rclone lsd remote:path
+
rclone lsd remote:path [flags]
+

Options

+
  -h, --help   help for lsd

rclone lsl

List all the objects path with modification time, size and path.

Synopsis

List all the objects path with modification time, size and path.

-
rclone lsl remote:path
+
rclone lsl remote:path [flags]
+

Options

+
  -h, --help   help for lsl

rclone md5sum

Produces an md5sum file for all the objects in the path.

Synopsis

Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces.

-
rclone md5sum remote:path
+
rclone md5sum remote:path [flags]
+

Options

+
  -h, --help   help for md5sum

rclone sha1sum

Produces an sha1sum file for all the objects in the path.

Synopsis

Produces an sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.

-
rclone sha1sum remote:path
+
rclone sha1sum remote:path [flags]
+

Options

+
  -h, --help   help for sha1sum

rclone size

Prints the total size and number of objects in remote:path.

Synopsis

Prints the total size and number of objects in remote:path.

-
rclone size remote:path
+
rclone size remote:path [flags]
+

Options

+
  -h, --help   help for size

rclone version

Show the version number.

Synopsis

Show the version number.

-
rclone version
+
rclone version [flags]
+

Options

+
  -h, --help   help for version

rclone cleanup

Clean up the remote if possible

Synopsis

Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes.

-
rclone cleanup remote:path
+
rclone cleanup remote:path [flags]
+

Options

+
  -h, --help   help for cleanup

rclone dedupe

Interactively find duplicate files delete/rename them.

Synopsis

-

By default dedup interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names.

+

By default dedupe interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names.

+

In the first pass it will merge directories with the same name. It will do this iteratively until all the identical directories have been merged.

The dedupe command will delete all but one of any identical (same md5sum) files it finds without confirmation. This means that for most duplicated files the dedupe command will not be interactive. You can use --dry-run to see what would happen without doing anything.

Here is an example run.

Before - with duplicates

@@ -347,13 +372,16 @@ two-3.txt: renamed from: two.txt

Or

rclone dedupe rename "drive:Google Photos"
rclone dedupe [mode] remote:path [flags]
-

Options

-
      --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
+

Options

+
      --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
+  -h, --help                 help for dedupe

rclone authorize

Remote authorization.

Synopsis

Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.

-
rclone authorize
+
rclone authorize [flags]
+

Options

+
  -h, --help   help for authorize

rclone cat

Concatenates any files and sends them to stdout.

Synopsis

@@ -366,10 +394,11 @@ two-3.txt: renamed from: two.txt
rclone --include "*.txt" cat remote:path/to/dir

Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.
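For example (path illustrative), to print the first 100 characters of a file, or just its last byte:

rclone cat --head 100 remote:path/to/file
rclone cat --offset -1 --count 1 remote:path/to/file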

rclone cat remote:path [flags]
-

Options

+

Options

      --count int    Only print N characters. (default -1)
       --discard      Discard the output instead of printing.
       --head int     Only print the first N characters.
+  -h, --help         help for cat
       --offset int   Start printing at offset N (or from end if -ve).
       --tail int     Only print the last N characters.

rclone copyto

@@ -387,7 +416,9 @@ if src is directory copy it to dst, overwriting existing files if they exist see copy command for full details

This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.

-
rclone copyto source:path dest:path
+
rclone copyto source:path dest:path [flags]
+

Options

+
  -h, --help   help for copyto

rclone cryptcheck

Cryptcheck checks the integrity of a crypted remote.

Synopsis

@@ -399,40 +430,74 @@ if src is directory

You can use it like this also, but that will involve downloading all the files in remote:path.

rclone cryptcheck remote:path encryptedremote:path

After it has run it will log the status of the encryptedremote:.

-
rclone cryptcheck remote:path cryptedremote:path
+
rclone cryptcheck remote:path cryptedremote:path [flags]
+

Options

+
  -h, --help   help for cryptcheck
+

rclone cryptdecode

+

Cryptdecode returns unencrypted file names.

+

Synopsis

+

rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.

+

use it like this

+
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
+
rclone cryptdecode encryptedremote: encryptedfilename [flags]
+

Options

+
  -h, --help   help for cryptdecode

rclone dbhashsum

Produces a Dropbox hash file for all the objects in the path.

-

Synopsis

-

Produces a Dropbox hash file for all the objects in the path. The hashes are calculated according to Dropbox content hash rules. The output is in the same format as md5sum and sha1sum.

-
rclone dbhashsum remote:path
-

rclone genautocomplete

-

Output bash completion script for rclone.

Synopsis

+

Produces a Dropbox hash file for all the objects in the path. The hashes are calculated according to Dropbox content hash rules. The output is in the same format as md5sum and sha1sum.

+
rclone dbhashsum remote:path [flags]
+

Options

+
  -h, --help   help for dbhashsum
+

rclone genautocomplete

+

Output completion script for a given shell.

+

Synopsis

+

Generates a shell completion script for rclone. Run with --help to list the supported shells.

+

Options

+
  -h, --help   help for genautocomplete
+

rclone genautocomplete bash

+

Output bash completion script for rclone.

+

Synopsis

Generates a bash shell autocompletion script for rclone.

This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg

-
sudo rclone genautocomplete
+
sudo rclone genautocomplete bash

Logout and login again to use the autocompletion scripts, or source them directly

. /etc/bash_completion

If you supply a command line argument the script will be written there.

-
rclone genautocomplete [output_file]
+
rclone genautocomplete bash [output_file] [flags]
+

Options

+
  -h, --help   help for bash
+

rclone genautocomplete zsh

+

Output zsh completion script for rclone.

+

Synopsis

+

Generates a zsh autocompletion script for rclone.

+

This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, eg

+
sudo rclone genautocomplete zsh
+

Logout and login again to use the autocompletion scripts, or source them directly

+
autoload -U compinit && compinit
+

If you supply a command line argument the script will be written there.

+
rclone genautocomplete zsh [output_file] [flags]
+

Options

+
  -h, --help   help for zsh

rclone gendocs

Output markdown docs for rclone to the directory supplied.

-

Synopsis

+

Synopsis

This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.

rclone gendocs output_directory [flags]
-

Options

+

Options

  -h, --help   help for gendocs

rclone listremotes

List all the remotes in the config file.

-

Synopsis

+

Synopsis

rclone listremotes lists all the available remotes from the config file.

When used with the -l flag it lists the types too.

rclone listremotes [flags]
-

Options

-
  -l, --long   Show the type as well as names.
+

Options

+
  -h, --help   help for listremotes
+  -l, --long   Show the type as well as names.

rclone lsjson

List directories and objects in the path in JSON format.

-

Synopsis

+

Synopsis

List directories and objects in the path in JSON format.

The output is an array of Items, where each Item looks like this

{ "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "IsDir" : false, "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Path" : "full/path/goes/here/file.txt", "Size" : 6 }

@@ -441,13 +506,14 @@ if src is directory

The time is in RFC3339 format with nanosecond precision.

The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line.
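For example, assuming jq is installed (it is not part of rclone), you could pull just the paths out of the array output:

rclone lsjson remote:path | jq -r '.[].Path'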

rclone lsjson remote:path [flags]
-

Options

+

Options

      --hash         Include hashes in the output (may take longer).
+  -h, --help         help for lsjson
       --no-modtime   Don't read the modification time (can speed things up).
   -R, --recursive    Recurse into the listing.

rclone mount

Mount the remote as a mountpoint. EXPERIMENTAL

-

Synopsis

+

Synopsis

rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

This is EXPERIMENTAL - use with care.

First set up your remote using rclone config. Check it works with rclone ls etc.

@@ -461,6 +527,12 @@ if src is directory fusermount -u /path/to/local/mount # OS X umount /path/to/local/mount +

Installing on Windows

+

To run rclone mount on Windows, you will need to download and install WinFsp.

+

WinFsp is an open source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses in combination with cgofuse. Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows.

+

Windows caveats

+

Note that drives created as Administrator are not visible to other accounts (including the account that was elevated as Administrator). So if you start a Windows drive from an Administrative Command Prompt and then try to access the same drive from Explorer (which does not run as Administrator), you will not be able to see the new drive.

+

The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using the WinFsp.Launcher infrastructure) which creates drives accessible for everyone on the system.

Limitations

This can only write files sequentially; it can only seek when reading. This means that many applications won't work with their files on an rclone mount.

The bucket based remotes (eg Swift, S3, Google Cloud Storage, B2, Hubic) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift: won't work whereas swift:bucket will as will swift:bucket/path. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.

@@ -473,17 +545,8 @@ umount /path/to/local/mount

Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.

Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)
-

Bugs

-
rclone mount remote:path /path/to/mountpoint [flags]
-

Options

+

Options

      --allow-non-empty           Allow mounting over a non-empty directory.
       --allow-other               Allow access to other users.
       --allow-root                Allow access to root user.
@@ -492,6 +555,7 @@ umount /path/to/local/mount
      --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
      --fuse-flag stringArray     Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
      --gid uint32                Override the gid field set by the filesystem. (default 502)
+  -h, --help                     help for mount
      --max-read-ahead int        The number of bytes that can be prefetched for sequential reads. (default 128k)
      --no-checksum               Don't compare checksums on up/download.
      --no-modtime                Don't read/write the modification time (can speed things up).
@@ -504,7 +568,7 @@ umount /path/to/local/mount
      --write-back-cache          Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.

rclone moveto

Move file or directory from source to dest.

-

Synopsis

+

Synopsis

If source:path is a file or directory then it moves it to a file or directory named dest:path.

This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.

So

@@ -518,10 +582,12 @@ if src is directory see move command for full details

This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.

Important: Since this can cause data loss, test first with the --dry-run flag.

-
rclone moveto source:path dest:path
+
rclone moveto source:path dest:path [flags]
+

Options

+
  -h, --help   help for moveto

rclone ncdu

Explore a remote with a text based user interface.

-

Synopsis

+

Synopsis

This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".

To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.

Here are the keys - press '?' to toggle the help on and off

@@ -534,18 +600,76 @@ if src is directory ? to toggle help on and off q/ESC/c-C to quit

This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment, most importantly deleting files, but it is useful as it stands.

-
rclone ncdu remote:path
+
rclone ncdu remote:path [flags]
+

Options

+
  -h, --help   help for ncdu

rclone obscure

Obscure password for use in the rclone.conf

-

Synopsis

+

Synopsis

Obscure password for use in the rclone.conf

-
rclone obscure password
+
rclone obscure password [flags]
+

Options

+
  -h, --help   help for obscure
+

rclone rcat

+

Copies standard input to file on remote.

+

Synopsis

+

rclone rcat reads from standard input (stdin) and copies it to a single remote file.

+
echo "hello world" | rclone rcat remote:path/to/file
+ffmpeg - | rclone rcat --checksum remote:path/to/file
+

If the remote file already exists, it will be overwritten.

+

rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote, so please check your remote's documentation. Generally speaking, setting this cutoff too high will decrease your performance.

+

Note that the upload cannot be retried either, because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching it locally and then using rclone move to send it to the destination.
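A minimal sketch of that cache-then-move pattern (some_command and the paths are placeholders):

some_command > /tmp/staging/output.dat
rclone move /tmp/staging remote:dest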

+
rclone rcat remote:path [flags]
+

Options

+
  -h, --help   help for rcat

rclone rmdirs

Remove empty directories under the path.

-

Synopsis

+

Synopsis

This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in it.

This is useful for tidying up remotes that rclone has left a lot of empty directories in.

-
rclone rmdirs remote:path
+
rclone rmdirs remote:path [flags]
+

Options

+
  -h, --help   help for rmdirs
+

rclone tree

+

List the contents of the remote in a tree like fashion.

+

Synopsis

+

rclone tree lists the contents of a remote in a similar way to the unix tree command.

+

For example

+
$ rclone tree remote:path
+/
+├── file1
+├── file2
+├── file3
+└── subdir
+    ├── file4
+    └── file5
+
+1 directories, 5 files
+

You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.

+

The rclone tree command has many options for controlling the listing, which are compatible with those of the unix tree command. Note that not all of them have short options as they conflict with rclone's short options.
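For example (path illustrative), listing only txt files two levels deep with sizes shown:

rclone tree --include "*.txt" --level 2 -s remote:path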

+
rclone tree remote:path [flags]
+

Options

+
  -a, --all             All files are listed (list . files too).
+  -C, --color           Turn colorization on always.
+  -d, --dirs-only       List directories only.
+      --dirsfirst       List directories before files (-U disables).
+      --full-path       Print the full path prefix for each file.
+  -h, --help            help for tree
+      --human           Print the size in a more human readable way.
+      --level int       Descend only level directories deep.
+  -D, --modtime         Print the date of last modification.
+  -i, --noindent        Don't print indentation lines.
+      --noreport        Turn off file/directory count at end of tree listing.
+  -o, --output string   Output to file instead of stdout.
+  -p, --protections     Print the protections for each file.
+  -Q, --quote           Quote filenames with double quotes.
+  -s, --size            Print the size in bytes of each file.
+      --sort string     Select sort: name,version,size,mtime,ctime.
+      --sort-ctime      Sort files by last status change time.
+  -t, --sort-modtime    Sort files by last modification time.
+  -r, --sort-reverse    Reverse the order of the sort.
+  -U, --unsorted        Leave files unsorted.
+      --version         Sort files alphanumerically by version.

Copying single files

rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory if it isn't.

For example, suppose you have a remote with a file in called test.jpg, then you could copy just that file like this

@@ -588,7 +712,7 @@ if src is directory

This can be used when scripting to make aged backups efficiently, eg

rclone sync remote:current-backup remote:previous-backup
 rclone sync /path/to/files remote:current-backup
-

Options

+

Options

Rclone has a number of options to control its behaviour.

Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes and G for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.

@@ -600,6 +724,8 @@ rclone sync /path/to/files remote:current-backup
rclone sync /path/to/local remote:current --backup-dir remote:old

will sync /path/to/local to remote:current, but for any files which would have been updated or deleted will be stored in remote:old.

If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today's date.
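A sketch of the dated-directory idea using shell command substitution (paths illustrative):

rclone sync /path/to/local remote:current --backup-dir remote:old/$(date +%Y-%m-%d)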

+

--bind string

+

Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn't resolve or resolves to more than one IP address it will give an error.

--bwlimit=BANDWIDTH_SPEC

This option controls the bandwidth limit. Limits can be specified in two ways: As a single limit, or as a timetable.

Single limits last for the duration of the session. To use a single limit, specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The default is 0 which means to not limit bandwidth.
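For example, to limit bandwidth usage to 10 MBytes/s for the whole session (remote names illustrative):

rclone sync --bwlimit 10M source:path dest:path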

@@ -634,6 +760,14 @@ rclone sync /path/to/files remote:current-backup

The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m by default.

--dedupe-mode MODE

Mode to run dedupe command in. One of interactive, skip, first, newest, oldest, rename. The default is interactive. See the dedupe command for more information as to what these options mean.

+

--disable FEATURE,FEATURE,...

+

This disables a comma separated list of optional features. For example to disable server side move and server side copy use:

+
--disable move,copy
+

The features can be specified in any case.

+

To see a list of which features can be disabled use:

+
--disable help
+

See the overview features and optional features to get an idea of which feature does what.

+

This flag can be useful for debugging and in exceptional circumstances (eg Google Drive limiting the total volume of Server Side Copies to 100GB/day).

-n, --dry-run

Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the sync command which deletes files in the destination.

--ignore-checksum

@@ -649,6 +783,11 @@ rclone sync /path/to/files remote:current-backup

-I, --ignore-times

Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.

Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum).

+

--immutable

+

Treat source and destination files as immutable and disallow modification.

+

With this option set, files will be created and deleted as requested, but existing files will never be updated. If an existing file does not match between the source and destination, rclone will give the error Source and destination exist but do not match: immutable file modified.

+

Note that only commands which transfer files (e.g. sync, copy, move) are affected by this behavior, and only modification is disallowed. Files may still be deleted explicitly (e.g. delete, purge) or implicitly (e.g. sync, move). Use copy --immutable if it is desired to avoid deletion as well as modification.

+

This can be useful as an additional layer of protection for immutable or append-only data sets (notably backup archives), where modification implies corruption and should not be propagated.
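For example (paths illustrative), adding new backups to an archive while refusing to modify existing ones:

rclone copy --immutable /srv/backups remote:archive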

--log-file=FILE

Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.

--log-level LEVEL

@@ -811,6 +950,7 @@ export RCLONE_CONFIG_PASS

Dump HTTP headers - will contain sensitive info such as Authorization: headers - use --dump-headers to dump without Authorization: headers. Can be very verbose. Useful for debugging only.

--dump-bodies

Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.

+

Note that the bodies are buffered in memory so don't use this for enormous files.

--dump-filters

Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.

--dump-headers

@@ -862,7 +1002,7 @@ export RCLONE_CONFIG_PASS

When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.

Environment Variables

Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

-

Options

+

Options

Every option in rclone can have its default set by environment variable.

To find the name of the environment variable, first, take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.

For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.
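For instance, in a shell session (destination illustrative):

export RCLONE_STATS=5s
rclone copy /path/to/local remote:backup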

@@ -1066,15 +1206,17 @@ file2.avi

Add include/exclude rules from a file.

This flag can be repeated. See above for the order the flags are processed in.

Prepare a file like this filter-file.txt

-
# a sample exclude rule file
+
# a sample filter rule file
 - secret*.jpg
 + *.jpg
 + *.png
 + file2.avi
+- /dir/Trash/**
++ /dir/**
 # exclude everything else
 - *

Then use as --filter-from filter-file.txt. The rules are processed in the order that they are defined.

-

This example will include all jpg and png files, exclude any files matching secret*.jpg and include file2.avi. Everything else will be excluded from the sync.

+

This example will include all jpg and png files, exclude any files matching secret*.jpg and include file2.avi. It will also include everything in the directory dir at the root of the sync, except dir/Trash which it will exclude. Everything else will be excluded from the sync.

--files-from - Read list of source-file names

This reads a list of file names from the file passed in and only these files are transferred. The filtering rules are ignored completely if you use this option.

This option can be repeated to read from more than one file. These are read in the order that they are placed on the command line.
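A sketch (file name and paths illustrative): put one file name per line, relative to the source, in files-from.txt, then:

rclone copy --files-from files-from.txt /home/user remote:backup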

@@ -1164,46 +1306,6 @@ user2/stuff
-Google Drive -MD5 -Yes -No -Yes -R/W - - -Amazon S3 -MD5 -Yes -No -No -R/W - - -Openstack Swift -MD5 -Yes -No -No -R/W - - -Dropbox -DBHASH † -Yes -Yes -No -- - - -Google Cloud Storage -MD5 -Yes -No -No -R/W - - Amazon Drive MD5 No @@ -1211,16 +1313,8 @@ user2/stuff No R - -Microsoft OneDrive -SHA1 -Yes -Yes -No -R - -Hubic +Amazon S3 MD5 Yes No @@ -1236,6 +1330,102 @@ user2/stuff R/W +Box +SHA1 +Yes +Yes +No +- + + +Dropbox +DBHASH † +Yes +Yes +No +- + + +FTP +- +No +No +No +- + + +Google Cloud Storage +MD5 +Yes +No +No +R/W + + +Google Drive +MD5 +Yes +No +Yes +R/W + + +HTTP +- +No +No +No +R + + +Hubic +MD5 +Yes +No +No +R/W + + +Microsoft Azure Blob Storage +MD5 +Yes +No +No +R/W + + +Microsoft OneDrive +SHA1 +Yes +Yes +No +R + + +Openstack Swift +MD5 +Yes +No +No +R/W + + +QingStor +MD5 +No +No +No +R/W + + +SFTP +MD5, SHA1 ‡ +Yes +Depends +No +- + + Yandex Disk MD5 Yes @@ -1244,30 +1434,6 @@ user2/stuff R/W -SFTP -- -Yes -Depends -No -- - - -FTP -- -No -Yes -No -- - - -HTTP -- -No -Yes -No -R - - The local filesystem All Yes @@ -1278,10 +1444,10 @@ user2/stuff

Hash

-

The cloud storage system supports various hash types of the objects.
-The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command.

-

To use the checksum checks between filesystems they must support a common hash type.

+

The cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command.

+

To verify checksums when transferring between cloud storage systems, both systems must support a common hash type.

† Note that Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.

+

‡ SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH.

ModTime

The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum flag.

All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.

@@ -1315,17 +1481,19 @@ The hashes are used when transferring data as an integrity check and can be spec DirMove CleanUp ListR +StreamUpload -Google Drive -Yes +Amazon Drive Yes +No Yes Yes No #575 No +No Amazon S3 @@ -1335,59 +1503,6 @@ The hashes are used when transferring data as an integrity check and can be spec No No Yes - - -Openstack Swift -Yes † -Yes -No -No -No -Yes - - -Dropbox -Yes -Yes -Yes -Yes -No #575 -No - - -Google Cloud Storage -Yes -Yes -No -No -No -Yes - - -Amazon Drive -Yes -No -Yes -Yes -No #575 -No - - -Microsoft OneDrive -Yes -Yes -Yes -No #197 -No #575 -No - - -Hubic -Yes † -Yes -No -No -No Yes @@ -1398,24 +1513,27 @@ The hashes are used when transferring data as an integrity check and can be spec No Yes Yes +Yes -Yandex Disk +Box +Yes +Yes +Yes Yes -No -No -No No #575 +No Yes -SFTP -No -No +Dropbox Yes Yes +Yes +Yes +No #575 No -No +Yes FTP @@ -1425,6 +1543,27 @@ The hashes are used when transferring data as an integrity check and can be spec Yes No No +Yes + + +Google Cloud Storage +Yes +Yes +No +No +No +Yes +Yes + + +Google Drive +Yes +Yes +Yes +Yes +Yes +No +Yes HTTP @@ -1434,8 +1573,79 @@ The hashes are used when transferring data as an integrity check and can be spec No No No +No +Hubic +Yes † +Yes +No +No +No +Yes +Yes + + +Microsoft Azure Blob Storage +Yes +Yes +No +No +No +Yes +No + + +Microsoft OneDrive +Yes +Yes +Yes +No #197 +No #575 +No +No + + +Openstack Swift +Yes † +Yes +No +No +No +Yes +Yes + + +QingStor +No +Yes +No +No +No +Yes +No + + +SFTP +No +No +Yes +Yes +No +No +Yes + + +Yandex Disk +Yes +No +No +No +Yes +Yes +Yes + + The local filesystem Yes No @@ -1443,6 +1653,7 @@ The hashes are used when transferring data as an integrity check and can be spec Yes No No +Yes @@ -1462,10 +1673,15 @@ The hashes are used when transferring data as an integrity check and can be spec

If the server can't do CleanUp then rclone cleanup will return an error.

ListR

The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list flag to work. See the rclone docs for more details.

-

Google Drive

-

Paths are specified as drive:path

-

Drive paths may be as deep as required, eg drive:directory/subdirectory.

-

The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config walks you through it.

+

StreamUpload

+

Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat.

+

Amazon Drive

+

Paths are specified as remote:path

+

Paths may be as deep as required, eg remote:directory/subdirectory.

+

The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config walks you through it.

+

The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.

+

NB rclone doesn't currently have its own Amazon Drive credentials (see the forum for why) so you will either need to have your own client_id and client_secret with Amazon Drive, or use a third party oauth proxy in which case you will need to enter client_id, client_secret, auth_url and token_url.

+

Note also that if you are not using Amazon's auth_url and token_url (ie you filled in something for those), then when setting up on a remote machine you can only use the method of copying the config file over - rclone authorize will not work.

Here is an example of how to make a remote called remote. First run:

 rclone config

This will guide you through an interactive setup process:

@@ -1507,15 +1723,20 @@ Choose a number from below, or type in your own value \ "sftp" 14 / Yandex Disk \ "yandex" -Storage> 8 -Google Application Client Id - leave blank normally. -client_id> -Google Application Client Secret - leave blank normally. -client_secret> +Storage> 1 +Amazon Application Client Id - required. +client_id> your client ID goes here +Amazon Application Client Secret - required. +client_secret> your client secret goes here +Auth server URL - leave blank to use Amazon's. +auth_url> Optional auth URL +Token server url - leave blank to use Amazon's. +token_url> Optional token URL Remote config +Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config. Use auto config? * Say Y if not sure - * Say N if you are working on a remote or headless machine or Y didn't work + * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y @@ -1523,238 +1744,51 @@ If your browser doesn't open automatically go to the following link: http:// Log in and authorize rclone for access Waiting for code... Got code -Configure this as a team drive? -y) Yes -n) No -y/n> n -------------------- [remote] -client_id = -client_secret = -token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null} +client_id = your client ID goes here +client_secret = your client secret goes here +auth_url = Optional auth URL +token_url = Optional token URL +token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y -

Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

-

You can then use it like this,

-

List directories in top level of your drive

+

See the remote setup docs for how to set it up on a machine with no Internet browser available.

+

Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

+

Once configured you can then use rclone like this,

+

List directories in top level of your Amazon Drive

rclone lsd remote:
-

List all the files in your drive

+

List all the files in your Amazon Drive

rclone ls remote:
-

To copy a local directory to a drive directory called backup

+

To copy a local directory to an Amazon Drive directory called backup

rclone copy /home/source remote:backup
-

Team drives

-

If you want to configure the remote to point to a Google Team Drive then answer y to the question Configure this as a team drive?.

-

This will fetch the list of Team Drives from google and allow you to configure which one you want to use. You can also type in a team drive ID if you prefer.

-

For example:

-
Configure this as a team drive?
-y) Yes
-n) No
-y/n> y
-Fetching team drive list...
-Choose a number from below, or type in your own value
- 1 / Rclone Test
-   \ "xxxxxxxxxxxxxxxxxxxx"
- 2 / Rclone Test 2
-   \ "yyyyyyyyyyyyyyyyyyyy"
- 3 / Rclone Test 3
-   \ "zzzzzzzzzzzzzzzzzzzz"
-Enter a Team Drive ID> 1
---------------------
-[remote]
-client_id = 
-client_secret = 
-token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
-team_drive = xxxxxxxxxxxxxxxxxxxx
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-

Modified time

-

Google drive stores modification times accurate to 1 ms.

-

Revisions

-

Google drive stores revisions of files. When you upload a change to an existing file to google drive using rclone it will create a new revision of that file.

-

Revisions follow the standard google policy which at time of writing was

- +

Modified time and MD5SUMs

+

Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.

+

It does store MD5SUMs so for a more accurate sync, you can use the --checksum flag.
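For example (paths illustrative):

rclone sync --checksum /home/source remote:backup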

Deleting files

-

By default rclone will delete files permanently when requested. If sending them to the trash is required instead then use the --drive-use-trash flag.

+

Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.

+

Using with non .com Amazon accounts

+

Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.

Specific options

Here are the command line options specific to this cloud storage system.

-

--drive-auth-owner-only

-

Only consider files owned by the authenticated user.

-

--drive-chunk-size=SIZE

-

Upload chunk size. Must be a power of 2 >= 256k. Default value is 8 MB.

-

Making this larger will improve performance, but note that each chunk is buffered in memory one per transfer.

-

Reducing this will reduce memory usage but decrease performance.

-

--drive-auth-owner-only

-

Only consider files owned by the authenticated user.

-

--drive-formats

-

Google documents can only be exported from Google drive. When rclone downloads a Google doc it chooses a format to download depending upon this setting.

-

By default the formats are docx,xlsx,pptx,svg which are a sensible default for an editable document.

-

When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list.

-

If you prefer an archive copy then you might use --drive-formats pdf, or if you prefer openoffice/libreoffice formats you might use --drive-formats ods,odt,odp.

-

Note that rclone adds the extension to the google doc, so if it is called My Spreadsheet on google docs, it will be exported as My Spreadsheet.xlsx or My Spreadsheet.pdf etc.

-

Here are the possible extensions with their corresponding mime types.

Extension  Mime Type                                                                   Description
csv        text/csv                                                                    Standard CSV format for Spreadsheets
doc        application/msword                                                          Microsoft Office Document
docx       application/vnd.openxmlformats-officedocument.wordprocessingml.document    Microsoft Office Document
epub       application/epub+zip                                                        E-book format
html       text/html                                                                   An HTML Document
jpg        image/jpeg                                                                  A JPEG Image File
odp        application/vnd.oasis.opendocument.presentation                             Openoffice Presentation
ods        application/vnd.oasis.opendocument.spreadsheet                              Openoffice Spreadsheet
ods        application/x-vnd.oasis.opendocument.spreadsheet                            Openoffice Spreadsheet
odt        application/vnd.oasis.opendocument.text                                     Openoffice Document
pdf        application/pdf                                                             Adobe PDF Format
png        image/png                                                                   PNG Image Format
pptx       application/vnd.openxmlformats-officedocument.presentationml.presentation  Microsoft Office Powerpoint
rtf        application/rtf                                                             Rich Text Format
svg        image/svg+xml                                                               Scalable Vector Graphics Format
tsv        text/tab-separated-values                                                   Standard TSV format for spreadsheets
txt        text/plain                                                                  Plain Text
xls        application/vnd.ms-excel                                                    Microsoft Office Spreadsheet
xlsx       application/vnd.openxmlformats-officedocument.spreadsheetml.sheet          Microsoft Office Spreadsheet
zip        application/zip                                                             A ZIP file of HTML, Images CSS
-

--drive-list-chunk int

-

Size of listing chunk 100-1000. 0 to disable. (default 1000)

-

--drive-shared-with-me

-

Only show files that are shared with me

-

--drive-skip-gdocs

-

Skip google documents in all listings. If given, gdocs practically become invisible to rclone.

-

--drive-trashed-only

-

Only show files that are in the trash. This will show trashed files in their original directory structure.

-

--drive-upload-cutoff=SIZE

-

File size cutoff for switching to chunked upload. Default is 8 MB.

-

--drive-use-trash

-

Send files to the trash instead of deleting permanently. Defaults to off, namely deleting files permanently.

+

--acd-templink-threshold=SIZE

Files this size or more will be downloaded via their tempLink. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.

+

To download files above this threshold, rclone requests a tempLink which downloads the file through a temporary URL directly from the underlying S3 storage.

+

--acd-upload-wait-per-gb=TIME

+

Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear.

+

The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears.

+

You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.

+

These values were determined empirically by observing lots of uploads of big files for a range of file sizes.

+

Upload with the -v flag to see more info about what rclone is doing in this situation.

Limitations

-

Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MBytes/s but lots of small files can take a long time.

-

Duplicated files

-

Sometimes, for no reason I've been able to track down, drive will duplicate a file that rclone uploads. Drive unlike all the other remotes can have duplicated files.

-

Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

-

Use rclone dedupe to fix duplicated files.

-

Note that this isn't just a problem with rclone, even Google Photos on Android duplicates files on drive sometimes.

-

Rclone appears to be re-copying files it shouldn't

-

There are two possible reasons for rclone to recopy files which haven't changed to Google Drive.

-

The first is the duplicated file issue above - run rclone dedupe and check your logs for duplicate object or directory messages.

-

The second is that sometimes Google reports different sizes for the Google Docs exports which will cause rclone to re-download Google Docs for no apparent reason. --ignore-size is a not very satisfactory work-around for this if it is causing you a lot of problems.

-

Google docs downloads sometimes fail with "Failed to copy: read X bytes expecting Y"

-

This is the same problem as above. Google reports the google doc is one size, but rclone downloads a different size. Work-around with the --ignore-size flag or wait for rclone to retry the download which it will.

-

Making your own client_id

-

When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.

-

However you might find you get better performance making your own client_id if you are a heavy user. Or you may not depending on exactly how Google have been raising rclone's rate limit.

-

Here is how to create your own Google Drive client ID for rclone:

-
  1. Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access)
  2. Select a project or create a new project.
  3. Under Overview, Google APIs, Google Apps APIs, click "Drive API", then "Enable".
  4. Click "Credentials" in the left-side panel (not "Go to credentials", which opens the wizard), then "Create credentials", then "OAuth client ID". It will prompt you to set the OAuth consent screen product name, if you haven't set one already.
  5. Choose an application type of "other", and click "Create". (the default name is fine)
  6. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.

(Thanks to @balazer on github for these instructions.)

+

Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

+

Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries flag) which should hopefully work around this problem.

+

Amazon Drive has an internal limit on the size of files that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.

+

At the time of writing (Jan 2016) it is in the area of 50GB per file. This means that larger files are likely to fail.

+

Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as with any other failure. To avoid this problem, use the --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.

Amazon S3

Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

Here is an example of making an s3 configuration. First run

@@ -1943,7 +1977,7 @@ y/e/d> y
rclone sync /home/local/directory remote:bucket

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
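For example (bucket name illustrative):

rclone ls --fast-list remote:bucket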

-

Modified time

+

Modified time

The modified time is stored as metadata on the object as X-Amz-Meta-Mtime as floating point since the epoch accurate to 1 ns.

Multipart uploads

rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.

@@ -1953,13 +1987,14 @@ y/e/d> y

There are two ways to supply rclone with a set of AWS credentials. In order of precedence:

@@ -2002,6 +2037,10 @@ y/e/d> y
  • The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects.
  • For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync.

    +

    Glacier

    +

    You can transition objects to glacier storage using a lifecycle policy. The bucket can still be synced or copied into normally, but if rclone tries to access the data you will see an error like below.

    +
    2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
    +

    In this case you need to restore the object(s) in question before using rclone.

    Specific options

    Here are the command line options specific to this cloud storage system.

    --s3-acl=STRING

    @@ -2116,677 +2155,104 @@ location_constraint = server_side_encryption =

    So once set up, for example to copy files into a bucket

    rclone copy /path/to/files minio:bucket
    -

    Swift

    -

    Swift refers to Openstack Object Storage. Commercial implementations of that being:

    - -

    Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.

    -

    Here is an example of making a swift configuration. First run

    -
    rclone config
    -

    This will guide you through an interactive setup process.

    +

    Wasabi

    Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Wasabi is designed for individuals and organizations that require a high-performance, reliable, and secure data storage infrastructure at minimal cost.

    Wasabi provides an S3 interface which can be configured for use with rclone like this.

    No remotes found - make a new one
     n) New remote
     s) Set configuration password
     n/s> n
    -name> remote
    +name> wasabi
     Type of storage to configure.
     Choose a number from below, or type in your own value
      1 / Amazon Drive
        \ "amazon cloud drive"
      2 / Amazon S3 (also Dreamhost, Ceph, Minio)
        \ "s3"
    - 3 / Backblaze B2
    -   \ "b2"
    - 4 / Dropbox
    -   \ "dropbox"
    - 5 / Encrypt/Decrypt a remote
    -   \ "crypt"
    - 6 / Google Cloud Storage (this is not Google Drive)
    -   \ "google cloud storage"
    - 7 / Google Drive
    -   \ "drive"
    - 8 / Hubic
    -   \ "hubic"
    - 9 / Local Disk
    -   \ "local"
    -10 / Microsoft OneDrive
    -   \ "onedrive"
    -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    -   \ "swift"
    -12 / SSH/SFTP Connection
    -   \ "sftp"
    -13 / Yandex Disk
    -   \ "yandex"
    -Storage> 11
    -User name to log in.
    -user> user_name
    -API key or password.
    -key> password_or_api_key
    -Authentication URL for server.
    +[snip]
    +Storage> s3
    +Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
     Choose a number from below, or type in your own value
    - 1 / Rackspace US
    -   \ "https://auth.api.rackspacecloud.com/v1.0"
    - 2 / Rackspace UK
    -   \ "https://lon.auth.api.rackspacecloud.com/v1.0"
    - 3 / Rackspace v2
    -   \ "https://identity.api.rackspacecloud.com/v2.0"
    - 4 / Memset Memstore UK
    -   \ "https://auth.storage.memset.com/v1.0"
    - 5 / Memset Memstore UK v2
    -   \ "https://auth.storage.memset.com/v2.0"
    - 6 / OVH
    -   \ "https://auth.cloud.ovh.net/v2.0"
    -auth> 1
    -User domain - optional (v3 auth)
    -domain> Default
    -Tenant name - optional for v1 auth, required otherwise
    -tenant> tenant_name
    -Tenant domain - optional (v3 auth)
    -tenant_domain>
    -Region name - optional
    -region>
    -Storage URL - optional
    -storage_url>
    -AuthVersion - optional - set to (1,2,3) if your auth URL has no version
    -auth_version>
    -Remote config
    ---------------------
    -[remote]
    -user = user_name
    -key = password_or_api_key
    -auth = https://auth.api.rackspacecloud.com/v1.0
    -domain = Default
    -tenant =
    -tenant_domain =
    -region =
    -storage_url =
    -auth_version =
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -

    This remote is called remote and can now be used like this

    -

    See all containers

    -
    rclone lsd remote:
    -

    Make a new container

    -
    rclone mkdir remote:container
    -

    List the contents of a container

    -
    rclone ls remote:container
    -

    Sync /home/local/directory to the remote container, deleting any excess files in the container.

    -
    rclone sync /home/local/directory remote:container
    -

    Configuration from an Openstack credentials file

    -

    An OpenStack credentials file typically looks something like this (without the comments)

    -
    export OS_AUTH_URL=https://a.provider.net/v2.0
    -export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
    -export OS_TENANT_NAME="1234567890123456"
    -export OS_USERNAME="123abc567xy"
    -echo "Please enter your OpenStack Password: "
    -read -sr OS_PASSWORD_INPUT
    -export OS_PASSWORD=$OS_PASSWORD_INPUT
    -export OS_REGION_NAME="SBG1"
    -if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
    -

    The config file needs to look something like this where $OS_USERNAME represents the value of the OS_USERNAME variable - 123abc567xy in the example above.

    -
    [remote]
    -type = swift
    -user = $OS_USERNAME
    -key = $OS_PASSWORD
    -auth = $OS_AUTH_URL
    -tenant = $OS_TENANT_NAME
    -

    Note that you may (or may not) need to set region too - try without first.

    -

    --fast-list

    -

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    -

    Specific options

    -

    Here are the command line options specific to this cloud storage system.

    -

    --swift-chunk-size=SIZE

    -

    Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value.

    -

    Modified time

    -

    The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

    -

    This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

    -

    Limitations

    -

    The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

    -

    Troubleshooting

    -

    Rclone gives Failed to create file system for "remote:": Bad Request

    -

    Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.

    -

    So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag.

    -

    This may also be caused by specifying the region when you shouldn't have (eg OVH).

    -

    Rclone gives Failed to create file system: Response didn't have storage storage url and auth token

    -

    This is most likely caused by forgetting to specify your tenant when setting up a swift remote.

    -

    Dropbox

    -

    Paths are specified as remote:path

    -

    Dropbox paths may be as deep as required, eg remote:directory/subdirectory.

    -

    The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config walks you through it.

    -

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    -

    This will guide you through an interactive setup process:

    -
    n) New remote
    -d) Delete remote
    -q) Quit config
    -e/n/d/q> n
    -name> remote
    -Type of storage to configure.
    + 1 / Enter AWS credentials in the next step
    +   \ "false"
    + 2 / Get AWS credentials from the environment (env vars or IAM)
    +   \ "true"
    +env_auth> 1
    +AWS Access Key ID - leave blank for anonymous access or runtime credentials.
    +access_key_id> YOURACCESSKEY
    +AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
    +secret_access_key> YOURSECRETACCESSKEY
    +Region to connect to.
     Choose a number from below, or type in your own value
    - 1 / Amazon Drive
    -   \ "amazon cloud drive"
    - 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    -   \ "s3"
    - 3 / Backblaze B2
    -   \ "b2"
    - 4 / Dropbox
    -   \ "dropbox"
    - 5 / Encrypt/Decrypt a remote
    -   \ "crypt"
    - 6 / Google Cloud Storage (this is not Google Drive)
    -   \ "google cloud storage"
    - 7 / Google Drive
    -   \ "drive"
    - 8 / Hubic
    -   \ "hubic"
    - 9 / Local Disk
    -   \ "local"
    -10 / Microsoft OneDrive
    -   \ "onedrive"
    -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    -   \ "swift"
    -12 / SSH/SFTP Connection
    -   \ "sftp"
    -13 / Yandex Disk
    -   \ "yandex"
    -Storage> 4
    -Dropbox App Key - leave blank normally.
    -app_key>
    -Dropbox App Secret - leave blank normally.
    -app_secret>
    -Remote config
    -Please visit:
    -https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
    -Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
    ---------------------
    -[remote]
    -app_key =
    -app_secret =
    -token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -

    You can then use it like this,

    -

    List directories in top level of your dropbox

    -
    rclone lsd remote:
    -

    List all the files in your dropbox

    -
    rclone ls remote:
    -

    To copy a local directory to a dropbox directory called backup

    -
    rclone copy /home/source remote:backup
    -

    Modified time and Hashes

    -

    Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.

    -

    This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use the --size-only or --checksum flag to stop it.
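
    For example, a sketch of a sync that compares sizes only (swap in --checksum to use Dropbox's own hash instead; paths are placeholders):

    rclone sync /home/source remote:backup --size-only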

    -

    Dropbox supports its own hash type which is checked for all transfers.

    -

    Specific options

    -

    Here are the command line options specific to this cloud storage system.

    -

    --dropbox-chunk-size=SIZE

    -

    Upload chunk size. Max 150M. The default is 128MB. Note that this isn't buffered into memory.

    -

    Limitations

    -

    Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    -

    There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail.

    -

    If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.
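
    As a sketch, for a directory called dir the work-around looks like this:

    rclone delete dropbox:dir
    rclone rmdir dropbox:dir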

    -

    Google Cloud Storage

    -

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

    -

    The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config walks you through it.

    -

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    -

    This will guide you through an interactive setup process:

    -
    n) New remote
    -d) Delete remote
    -q) Quit config
    -e/n/d/q> n
    -name> remote
    -Type of storage to configure.
    +   / The default endpoint - a good choice if you are unsure.
    + 1 | US Region, Northern Virginia or Pacific Northwest.
    +   | Leave location constraint empty.
    +   \ "us-east-1"
    +[snip]
    +region> us-east-1
    +Endpoint for S3 API.
    +Leave blank if using AWS to use the default endpoint for the region.
    +Specify if using an S3 clone such as Ceph.
    +endpoint> s3.wasabisys.com
    +Location constraint - must be set to match the Region. Used when creating buckets only.
     Choose a number from below, or type in your own value
    - 1 / Amazon Drive
    -   \ "amazon cloud drive"
    - 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    -   \ "s3"
    - 3 / Backblaze B2
    -   \ "b2"
    - 4 / Dropbox
    -   \ "dropbox"
    - 5 / Encrypt/Decrypt a remote
    -   \ "crypt"
    - 6 / Google Cloud Storage (this is not Google Drive)
    -   \ "google cloud storage"
    - 7 / Google Drive
    -   \ "drive"
    - 8 / Hubic
    -   \ "hubic"
    - 9 / Local Disk
    -   \ "local"
    -10 / Microsoft OneDrive
    -   \ "onedrive"
    -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    -   \ "swift"
    -12 / SSH/SFTP Connection
    -   \ "sftp"
    -13 / Yandex Disk
    -   \ "yandex"
    -Storage> 6
    -Google Application Client Id - leave blank normally.
    -client_id>
    -Google Application Client Secret - leave blank normally.
    -client_secret>
    -Project number optional - needed only for list/create/delete buckets - see your developer console.
    -project_number> 12345678
    -Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
    -service_account_file>
    -Access Control List for new objects.
    -Choose a number from below, or type in your own value
    - 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
    -   \ "authenticatedRead"
    - 2 / Object owner gets OWNER access, and project team owners get OWNER access.
    -   \ "bucketOwnerFullControl"
    - 3 / Object owner gets OWNER access, and project team owners get READER access.
    -   \ "bucketOwnerRead"
    - 4 / Object owner gets OWNER access [default if left blank].
    -   \ "private"
    - 5 / Object owner gets OWNER access, and project team members get access according to their roles.
    -   \ "projectPrivate"
    - 6 / Object owner gets OWNER access, and all Users get READER access.
    -   \ "publicRead"
    -object_acl> 4
    -Access Control List for new buckets.
    -Choose a number from below, or type in your own value
    - 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
    -   \ "authenticatedRead"
    - 2 / Project team owners get OWNER access [default if left blank].
    -   \ "private"
    - 3 / Project team members get access according to their roles.
    -   \ "projectPrivate"
    - 4 / Project team owners get OWNER access, and all Users get READER access.
    -   \ "publicRead"
    - 5 / Project team owners get OWNER access, and all Users get WRITER access.
    -   \ "publicReadWrite"
    -bucket_acl> 2
    -Location for the newly created buckets.
    -Choose a number from below, or type in your own value
    - 1 / Empty for default location (US).
    + 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
        \ ""
    - 2 / Multi-regional location for Asia.
    -   \ "asia"
    - 3 / Multi-regional location for Europe.
    -   \ "eu"
    - 4 / Multi-regional location for United States.
    -   \ "us"
    - 5 / Taiwan.
    -   \ "asia-east1"
    - 6 / Tokyo.
    -   \ "asia-northeast1"
    - 7 / Singapore.
    -   \ "asia-southeast1"
    - 8 / Sydney.
    -   \ "australia-southeast1"
    - 9 / Belgium.
    -   \ "europe-west1"
    -10 / London.
    -   \ "europe-west2"
    -11 / Iowa.
    -   \ "us-central1"
    -12 / South Carolina.
    -   \ "us-east1"
    -13 / Northern Virginia.
    -   \ "us-east4"
    -14 / Oregon.
    -   \ "us-west1"
    -location> 12
    -The storage class to use when storing objects in Google Cloud Storage.
    +[snip]
    +location_constraint> 
    +Canned ACL used when creating buckets and/or storing objects in S3.
    +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
    +Choose a number from below, or type in your own value
    + 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
    +   \ "private"
    +[snip]
    +acl> 
    +The server-side encryption algorithm used when storing this object in S3.
    +Choose a number from below, or type in your own value
    + 1 / None
    +   \ ""
    + 2 / AES256
    +   \ "AES256"
    +server_side_encryption> 
    +The storage class to use when storing objects in S3.
     Choose a number from below, or type in your own value
      1 / Default
        \ ""
    - 2 / Multi-regional storage class
    -   \ "MULTI_REGIONAL"
    - 3 / Regional storage class
    -   \ "REGIONAL"
    - 4 / Nearline storage class
    -   \ "NEARLINE"
    - 5 / Coldline storage class
    -   \ "COLDLINE"
    - 6 / Durable reduced availability storage class
    -   \ "DURABLE_REDUCED_AVAILABILITY"
    -storage_class> 5
    + 2 / Standard storage class
    +   \ "STANDARD"
    + 3 / Reduced redundancy storage class
    +   \ "REDUCED_REDUNDANCY"
    + 4 / Standard Infrequent Access storage class
    +   \ "STANDARD_IA"
    +storage_class> 
     Remote config
    -Use auto config?
    - * Say Y if not sure
    - * Say N if you are working on a remote or headless machine or Y didn't work
    -y) Yes
    -n) No
    -y/n> y
    -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    -Log in and authorize rclone for access
    -Waiting for code...
    -Got code
     --------------------
    -[remote]
    -type = google cloud storage
    -client_id =
    -client_secret =
    -token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
    -project_number = 12345678
    -object_acl = private
    -bucket_acl = private
    +[wasabi]
    +env_auth = false
    +access_key_id = YOURACCESSKEY
    +secret_access_key = YOURSECRETACCESSKEY
    +region = us-east-1
    +endpoint = s3.wasabisys.com
    +location_constraint = 
    +acl = 
    +server_side_encryption = 
    +storage_class = 
     --------------------
     y) Yes this is OK
     e) Edit this remote
     d) Delete this remote
     y/e/d> y
    -

    Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

    -

    This remote is called remote and can now be used like this

    -

    See all the buckets in your project

    -
    rclone lsd remote:
    -

    Make a new bucket

    -
    rclone mkdir remote:bucket
    -

    List the contents of a bucket

    -
    rclone ls remote:bucket
    -

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    -
    rclone sync /home/local/directory remote:bucket
    -

    Service Account support

    -

    You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

    -

    To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.

    -

    To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow.
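
    As an illustrative sketch, the resulting config section might then look like this (the credentials path and project number are placeholders):

    [remote]
    type = google cloud storage
    project_number = 12345678
    service_account_file = /path/to/credentials.json
    object_acl = private
    bucket_acl = private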

    -

    --fast-list

    -

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    -

    Modified time

    -

    Google Cloud Storage stores MD5 sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns.

    -

    Amazon Drive

    -

    Paths are specified as remote:path

    -

    Paths may be as deep as required, eg remote:directory/subdirectory.

    -

    The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config walks you through it.

    -

    The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.

    -

    NB rclone doesn't currently have its own Amazon Drive credentials (see the forum for why) so you will either need to have your own client_id and client_secret with Amazon Drive, or use a third party oauth proxy in which case you will need to enter client_id, client_secret, auth_url and token_url.

    -

    Note also if you are not using Amazon's auth_url and token_url, (ie you filled in something for those) then if setting up on a remote machine you can only use the copy the config file method of configuration - rclone authorize will not work.

    -

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    -

    This will guide you through an interactive setup process:

    -
    No remotes found - make a new one
    -n) New remote
    -r) Rename remote
    -c) Copy remote
    -s) Set configuration password
    -q) Quit config
    -n/r/c/s/q> n
    -name> remote
    -Type of storage to configure.
    -Choose a number from below, or type in your own value
    - 1 / Amazon Drive
    -   \ "amazon cloud drive"
    - 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    -   \ "s3"
    - 3 / Backblaze B2
    -   \ "b2"
    - 4 / Dropbox
    -   \ "dropbox"
    - 5 / Encrypt/Decrypt a remote
    -   \ "crypt"
    - 6 / FTP Connection
    -   \ "ftp"
    - 7 / Google Cloud Storage (this is not Google Drive)
    -   \ "google cloud storage"
    - 8 / Google Drive
    -   \ "drive"
    - 9 / Hubic
    -   \ "hubic"
    -10 / Local Disk
    -   \ "local"
    -11 / Microsoft OneDrive
    -   \ "onedrive"
    -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    -   \ "swift"
    -13 / SSH/SFTP Connection
    -   \ "sftp"
    -14 / Yandex Disk
    -   \ "yandex"
    -Storage> 1
    -Amazon Application Client Id - required.
    -client_id> your client ID goes here
    -Amazon Application Client Secret - required.
    -client_secret> your client secret goes here
    -Auth server URL - leave blank to use Amazon's.
    -auth_url> Optional auth URL
    -Token server url - leave blank to use Amazon's.
    -token_url> Optional token URL
    -Remote config
    -Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
    -Use auto config?
    - * Say Y if not sure
    - * Say N if you are working on a remote or headless machine
    -y) Yes
    -n) No
    -y/n> y
    -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    -Log in and authorize rclone for access
    -Waiting for code...
    -Got code
    ---------------------
    -[remote]
    -client_id = your client ID goes here
    -client_secret = your client secret goes here
    -auth_url = Optional auth URL
    -token_url = Optional token URL
    -token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    -

    Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    -

    Once configured you can then use rclone like this,

    -

    List directories in top level of your Amazon Drive

    -
    rclone lsd remote:
    -

    List all the files in your Amazon Drive

    -
    rclone ls remote:
    -

    To copy a local directory to an Amazon Drive directory called backup

    -
    rclone copy /home/source remote:backup
    -

    Modified time and MD5SUMs

    -

    Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.

    -

    It does store MD5SUMs so for a more accurate sync, you can use the --checksum flag.

    -

    Deleting files

    -

    Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.

    -

    Using with non .com Amazon accounts

    -

    Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.

    -

    Specific options

    -

    Here are the command line options specific to this cloud storage system.

    --acd-templink-threshold=SIZE

    Files this size or more will be downloaded via their tempLink. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.

    -

    To download files above this threshold, rclone requests a tempLink which downloads the file through a temporary URL directly from the underlying S3 storage.

    -

    --acd-upload-wait-per-gb=TIME

    -

    Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear.

    -

    The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears.

    -

    You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.

    -

    These values were determined empirically by observing lots of uploads of big files for a range of file sizes.

    -

    Upload with the -v flag to see more info about what rclone is doing in this situation.
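
    For example, a sketch combining both suggestions (paths are placeholders):

    rclone copy /home/source remote:backup --acd-upload-wait-per-gb 0 -v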

    -

    Limitations

    -

    Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    -

    Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries flag) which should hopefully work around this problem.

    -

    Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.

    -

    At the time of writing (Jan 2016) is in the area of 50GB per file. This means that larger files are likely to fail.

    -

    Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.

    -

    Microsoft OneDrive

    -

    Paths are specified as remote:path

    -

    Paths may be as deep as required, eg remote:directory/subdirectory.

    -

    The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.

    -

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    -

    This will guide you through an interactive setup process:

    -
    No remotes found - make a new one
    -n) New remote
    -s) Set configuration password
    -n/s> n
    -name> remote
    -Type of storage to configure.
    -Choose a number from below, or type in your own value
    - 1 / Amazon Drive
    -   \ "amazon cloud drive"
    - 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    -   \ "s3"
    - 3 / Backblaze B2
    -   \ "b2"
    - 4 / Dropbox
    -   \ "dropbox"
    - 5 / Encrypt/Decrypt a remote
    -   \ "crypt"
    - 6 / Google Cloud Storage (this is not Google Drive)
    -   \ "google cloud storage"
    - 7 / Google Drive
    -   \ "drive"
    - 8 / Hubic
    -   \ "hubic"
    - 9 / Local Disk
    -   \ "local"
    -10 / Microsoft OneDrive
    -   \ "onedrive"
    -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    -   \ "swift"
    -12 / SSH/SFTP Connection
    -   \ "sftp"
    -13 / Yandex Disk
    -   \ "yandex"
    -Storage> 10
    -Microsoft App Client Id - leave blank normally.
    -client_id>
    -Microsoft App Client Secret - leave blank normally.
    -client_secret>
    -Remote config
    -Use auto config?
    - * Say Y if not sure
    - * Say N if you are working on a remote or headless machine
    -y) Yes
    -n) No
    -y/n> y
    -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    -Log in and authorize rclone for access
    -Waiting for code...
    -Got code
    ---------------------
    -[remote]
    -client_id =
    -client_secret =
    -token = {"access_token":"XXXXXX"}
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    -

    Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    -

    Once configured you can then use rclone like this,

    -

    List directories in top level of your OneDrive

    -
    rclone lsd remote:
    -

    List all the files in your OneDrive

    -
    rclone ls remote:
    -

    To copy a local directory to an OneDrive directory called backup

    -
    rclone copy /home/source remote:backup
    -

    Modified time and hashes

    -

    OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

    -

    OneDrive supports SHA1 type hashes, so you can use the --checksum flag.
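
    For example, a sketch of a checksum-based sync (paths are placeholders):

    rclone sync /home/source remote:backup --checksum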

    -

    Deleting files

    -

    Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.

    -

    Specific options

    -

    Here are the command line options specific to this cloud storage system.

    -

    --onedrive-chunk-size=SIZE

    -

    Above this size files will be chunked - must be a multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.

    -

    --onedrive-upload-cutoff=SIZE

    -

    Cutoff for switching to chunked upload - must be <= 100MB. The default is 10MB.
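
    As an illustrative sketch, both options together (the values are assumptions; the chunk size keeps to the multiple-of-320k rule and the cutoff to the <= 100MB rule):

    rclone copy /home/source remote:backup --onedrive-upload-cutoff 50M --onedrive-chunk-size 20M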

    -

    Limitations

    -

    Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    -

    Rclone only supports your default OneDrive, and doesn't work with OneDrive for Business. Both these issues may be fixed at some point depending on user demand!

    -

    There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ instead.

    -

    The largest allowed file size is 10GiB (10,737,418,240 bytes).

    -

    Hubic

    -

    Paths are specified as remote:path

    -

    Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.

    -

    The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. rclone config walks you through it.

    -

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    -

    This will guide you through an interactive setup process:

    -
    n) New remote
    -s) Set configuration password
    -n/s> n
    -name> remote
    -Type of storage to configure.
    -Choose a number from below, or type in your own value
    - 1 / Amazon Drive
    -   \ "amazon cloud drive"
    - 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    -   \ "s3"
    - 3 / Backblaze B2
    -   \ "b2"
    - 4 / Dropbox
    -   \ "dropbox"
    - 5 / Encrypt/Decrypt a remote
    -   \ "crypt"
    - 6 / Google Cloud Storage (this is not Google Drive)
    -   \ "google cloud storage"
    - 7 / Google Drive
    -   \ "drive"
    - 8 / Hubic
    -   \ "hubic"
    - 9 / Local Disk
    -   \ "local"
    -10 / Microsoft OneDrive
    -   \ "onedrive"
    -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    -   \ "swift"
    -12 / SSH/SFTP Connection
    -   \ "sftp"
    -13 / Yandex Disk
    -   \ "yandex"
    -Storage> 8
    -Hubic Client Id - leave blank normally.
    -client_id>
    -Hubic Client Secret - leave blank normally.
    -client_secret>
    -Remote config
    -Use auto config?
    - * Say Y if not sure
    - * Say N if you are working on a remote or headless machine
    -y) Yes
    -n) No
    -y/n> y
    -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    -Log in and authorize rclone for access
    -Waiting for code...
    -Got code
    ---------------------
    -[remote]
    -client_id =
    -client_secret =
    -token = {"access_token":"XXXXXX"}
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    -

    Note that rclone runs a webserver on your local machine to collect the token as returned from Hubic. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    -

    Once configured you can then use rclone like this,

    -

    List containers in the top level of your Hubic

    -
    rclone lsd remote:
    -

    List all the files in your Hubic

    -
    rclone ls remote:
    -

    To copy a local directory to an Hubic directory called backup

    -
    rclone copy /home/source remote:backup
    -

    If you want the directory to be visible in the official Hubic browser, you need to copy your files to the default directory

    -
    rclone copy /home/source remote:default/backup
    -

    --fast-list

    -

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    -

    Modified time

    -

    The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

    -

    This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

    -

    Note that Hubic wraps the Swift backend, so most of the properties of the Swift remote are the same.

    -

    Limitations

    -

    This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.

    -

    The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

    +

    This will leave the config file looking like this.

    +
    [wasabi]
    +env_auth = false
    +access_key_id = YOURACCESSKEY
    +secret_access_key = YOURSECRETACCESSKEY
    +region = us-east-1
    +endpoint = s3.wasabisys.com
    +location_constraint = 
    +acl = 
    +server_side_encryption = 
    +storage_class = 

    Backblaze B2

    B2 is Backblaze's cloud storage system.

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

    @@ -2853,9 +2319,9 @@ y/e/d> y
    rclone ls remote:bucket

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    rclone sync /home/local/directory remote:bucket
    --fast-list

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    Modified time

    The modified time is stored as metadata on the object as X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.

    Modified times are used in syncing and are fully supported except in the case of updating a modification time on an existing object. In this case the object will be uploaded again as B2 doesn't have an API method to set the modification time independent of doing an upload.

    SHA1 checksums

    @@ -2865,8 +2331,8 @@ y/e/d> y

    Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about --transfers 32 though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4 is definitely too low for Backblaze B2 though.

    Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most --transfers of these in use at any moment, so this sets the upper limit on the memory used.
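
    Putting that together, a sketch of a higher-parallelism sync (tune the number to your hardware and memory budget):

    rclone sync /home/local/directory b2:bucket --transfers 32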

    Versions

    When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the --b2-hard-delete flag which would permanently remove the file instead of hiding it.

    Old versions of files, where available, are visible using the --b2-versions flag.
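
    As a sketch, listing old versions and opting in to permanent deletion look like this (paths are placeholders):

    rclone ls remote:bucket --b2-versions
    rclone delete remote:bucket/path/to/file --b2-hard-delete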

    If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg rclone cleanup remote:bucket/path/to/stuff.

    When you purge a bucket, the current and the old versions will be deleted then the bucket will be deleted.

    However delete will cause the current versions of the files to become hidden old versions.

    @@ -2911,7 +2377,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test /b2api/v1/b2_finish_large_file

    B2 with crypt

    When using B2 with crypt files are encrypted into a temporary location and streamed from there. This is required to calculate the encrypted file's checksum before beginning the upload. On Windows the %TMPDIR% environment variable is used as the temporary location. If the file requires chunking, both the chunking and encryption will take place in memory.

    Specific options

    Here are the command line options specific to this cloud storage system.

    --b2-chunk-size=SIZE

    When uploading large files chunk the file into this size. Note that these chunks are buffered in memory and there might be a maximum of --transfers chunks in progress at once. 5,000,000 Bytes is the minimum size (default 96M).

    @@ -2940,289 +2406,10 @@ $ rclone -q --b2-versions ls b2:cleanup-test 15 one-v2016-07-02-155621-000.txt

    Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.

    Note that when using --b2-versions no file write operations are permitted, so you can't upload files or delete them.

    -

    Yandex Disk

    -

    Yandex Disk is a cloud storage solution created by Yandex.

    -

    Yandex paths may be as deep as required, eg remote:directory/subdirectory.

    -

    Here is an example of making a yandex configuration. First run

    -
    rclone config
    -

    This will guide you through an interactive setup process:

    -
    No remotes found - make a new one
    -n) New remote
    -s) Set configuration password
    -n/s> n
    -name> remote
    -Type of storage to configure.
    -Choose a number from below, or type in your own value
    - 1 / Amazon Drive
    -   \ "amazon cloud drive"
    - 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    -   \ "s3"
    - 3 / Backblaze B2
    -   \ "b2"
    - 4 / Dropbox
    -   \ "dropbox"
    - 5 / Encrypt/Decrypt a remote
    -   \ "crypt"
    - 6 / Google Cloud Storage (this is not Google Drive)
    -   \ "google cloud storage"
    - 7 / Google Drive
    -   \ "drive"
    - 8 / Hubic
    -   \ "hubic"
    - 9 / Local Disk
    -   \ "local"
    -10 / Microsoft OneDrive
    -   \ "onedrive"
    -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    -   \ "swift"
    -12 / SSH/SFTP Connection
    -   \ "sftp"
    -13 / Yandex Disk
    -   \ "yandex"
    -Storage> 13
    -Yandex Client Id - leave blank normally.
    -client_id>
    -Yandex Client Secret - leave blank normally.
    -client_secret>
    -Remote config
    -Use auto config?
    - * Say Y if not sure
    - * Say N if you are working on a remote or headless machine
    -y) Yes
    -n) No
    -y/n> y
    -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    -Log in and authorize rclone for access
    -Waiting for code...
    -Got code
    ---------------------
    -[remote]
    -client_id =
    -client_secret =
    -token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    -

    Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    -

    Once configured you can then use rclone like this,

    -

    See top level directories

    -
    rclone lsd remote:
    -

    Make a new directory

    -
    rclone mkdir remote:directory
    -

    List the contents of a directory

    -
    rclone ls remote:directory
    -

    Sync /home/local/directory to the remote path, deleting any excess files in the path.

    -
    rclone sync /home/local/directory remote:directory
    -

    --fast-list

    -

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    -

    Modified time

    -

    Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format.

    -

    MD5 checksums

    -

    MD5 checksums are natively supported by Yandex Disk.

    -

    SFTP

    -

    SFTP is the Secure (or SSH) File Transfer Protocol.

    -

    It runs over SSH v2 and is standard with most modern SSH installations.

    -

    Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the users home directory.

    -

    Here is an example of making a SFTP configuration. First run

    -
    rclone config
    -

    This will guide you through an interactive setup process.

    -
    No remotes found - make a new one
    -n) New remote
    -s) Set configuration password
    -q) Quit config
    -n/s/q> n
    -name> remote
    -Type of storage to configure.
    -Choose a number from below, or type in your own value
    - 1 / Amazon Drive
    -   \ "amazon cloud drive"
    - 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    -   \ "s3"
    - 3 / Backblaze B2
    -   \ "b2"
    - 4 / Dropbox
    -   \ "dropbox"
    - 5 / Encrypt/Decrypt a remote
    -   \ "crypt"
    - 6 / FTP Connection
    -   \ "ftp"
    - 7 / Google Cloud Storage (this is not Google Drive)
    -   \ "google cloud storage"
    - 8 / Google Drive
    -   \ "drive"
    - 9 / Hubic
    -   \ "hubic"
    -10 / Local Disk
    -   \ "local"
    -11 / Microsoft OneDrive
    -   \ "onedrive"
    -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    -   \ "swift"
    -13 / SSH/SFTP Connection
    -   \ "sftp"
    -14 / Yandex Disk
    -   \ "yandex"
    -15 / http Connection
    -   \ "http"
    -Storage> sftp
    -SSH host to connect to
    -Choose a number from below, or type in your own value
    - 1 / Connect to example.com
    -   \ "example.com"
    -host> example.com
    -SSH username, leave blank for current username, ncw
    -user> sftpuser
    -SSH port, leave blank to use default (22)
    -port> 
    -SSH password, leave blank to use ssh-agent.
    -y) Yes type in my own password
    -g) Generate random password
    -n) No leave this optional password blank
    -y/g/n> n
    -Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
    -key_file> 
    -Remote config
    ---------------------
    -[remote]
    -host = example.com
    -user = sftpuser
    -port = 
    -pass = 
    -key_file = 
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -

    This remote is called remote and can now be used like this

    -

    See all directories in the home directory

    -
    rclone lsd remote:
    -

    Make a new directory

    -
    rclone mkdir remote:path/to/directory
    -

    List the contents of a directory

    -
    rclone ls remote:path/to/directory
    -

    Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

    -
    rclone sync /home/local/directory remote:directory
    -

    SSH Authentication

    -

    The SFTP remote supports 3 authentication methods

      • Password
      • Key file
      • ssh-agent

    Key files should be unencrypted PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa.

    -

    If you don't specify pass or key_file then it will attempt to contact an ssh-agent.

    -

    ssh-agent on macOS

    -

    Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the OS. The most effective work-around seems to be to start an ssh-agent in each session, eg

    -
    eval `ssh-agent -s` && ssh-add -A
    -

    And then at the end of the session

    -
    eval `ssh-agent -k`
    -

    These commands can be used in scripts of course.

    -

    Modified time

    -

    Modified times are stored on the server to 1 second precision.

    -

    Modified times are used in syncing and are fully supported.

    -

    Limitations

    -

    SFTP does not support any checksums.

    -

    The only ssh agent supported under Windows is PuTTY's pageant.

    -

    SFTP isn't supported under plan9 until this issue is fixed.

    -

    Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth

    -

    Note that --timeout isn't supported (but --contimeout is).

    -

    FTP

    -

    FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package.

    -

    Here is an example of making an FTP configuration. First run

    -
    rclone config
    -

    This will guide you through an interactive setup process. An FTP remote only needs a host together with a username and a password. With an anonymous FTP server, you will need to use anonymous as the username and your email address as the password.

    -
    No remotes found - make a new one
    -n) New remote
    -r) Rename remote
    -c) Copy remote
    -s) Set configuration password
    -q) Quit config
    -n/r/c/s/q> n
    -name> remote
    -Type of storage to configure.
    -Choose a number from below, or type in your own value
    - 1 / Amazon Drive
    -   \ "amazon cloud drive"
    - 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    -   \ "s3"
    - 3 / Backblaze B2
    -   \ "b2"
    - 4 / Dropbox
    -   \ "dropbox"
    - 5 / Encrypt/Decrypt a remote
    -   \ "crypt"
    - 6 / FTP Connection 
    -   \ "ftp"
    - 7 / Google Cloud Storage (this is not Google Drive)
    -   \ "google cloud storage"
    - 8 / Google Drive
    -   \ "drive"
    - 9 / Hubic
    -   \ "hubic"
    -10 / Local Disk
    -   \ "local"
    -11 / Microsoft OneDrive
    -   \ "onedrive"
    -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    -   \ "swift"
    -13 / SSH/SFTP Connection
    -   \ "sftp"
    -14 / Yandex Disk
    -   \ "yandex"
    -Storage> ftp
    -FTP host to connect to
    -Choose a number from below, or type in your own value
    - 1 / Connect to ftp.example.com
    -   \ "ftp.example.com"
    -host> ftp.example.com
    -FTP username, leave blank for current username, ncw
    -user>
    -FTP port, leave blank to use default (21)
    -port>
    -FTP password
    -y) Yes type in my own password
    -g) Generate random password
    -y/g> y
    -Enter the password:
    -password:
    -Confirm the password:
    -password:
    -Remote config
    ---------------------
    -[remote]
    -host = ftp.example.com
    -user = 
    -port =
    -pass = *** ENCRYPTED ***
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -

    This remote is called remote and can now be used like this

    -

    See all directories in the home directory

    -
    rclone lsd remote:
    -

    Make a new directory

    -
    rclone mkdir remote:path/to/directory
    -

    List the contents of a directory

    -
    rclone ls remote:path/to/directory
    -

    Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

    -
    rclone sync /home/local/directory remote:directory
    -

    Modified time

    -

    FTP does not support modified times. Any times you see on the server will be time of upload.

    -

    Checksums

    -

    FTP does not support any checksums.

    -

    Limitations

    -

    Note that since FTP isn't HTTP based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth

    -

    Note that --timeout isn't supported (but --contimeout is).

    -

    FTP could support server side move but doesn't yet.

    -

    HTTP

    -

    The HTTP remote is a read only remote for reading files of a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!)

    -

    Paths are specified as remote: or remote:path/to/dir.

    +

    Box

    +

    Paths are specified as remote:path

    +

    Paths may be as deep as required, eg remote:directory/subdirectory.

    +

    The initial setup for Box involves getting a token from Box which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -3240,50 +2427,87 @@ Choose a number from below, or type in your own value
    \ "s3"
 3 / Backblaze B2
    \ "b2"
- 4 / Dropbox
+ 4 / Box
+   \ "box"
+ 5 / Dropbox
    \ "dropbox"
- 5 / Encrypt/Decrypt a remote
+ 6 / Encrypt/Decrypt a remote
    \ "crypt"
- 6 / FTP Connection
+ 7 / FTP Connection
    \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
+ 8 / Google Cloud Storage (this is not Google Drive)
    \ "google cloud storage"
- 8 / Google Drive
+ 9 / Google Drive
    \ "drive"
- 9 / Hubic
+10 / Hubic
    \ "hubic"
-10 / Local Disk
+11 / Local Disk
    \ "local"
-11 / Microsoft OneDrive
+12 / Microsoft OneDrive
    \ "onedrive"
-12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+13 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    \ "swift"
-13 / SSH/SFTP Connection
+14 / SSH/SFTP Connection
    \ "sftp"
-14 / Yandex Disk
+15 / Yandex Disk
    \ "yandex"
-15 / http Connection
+16 / http Connection
    \ "http"
-Storage> http
-URL of http host to connect to
-Choose a number from below, or type in your own value
- 1 / Connect to example.com
-   \ "https://example.com"
-url> https://beta.rclone.org
+Storage> box
+Box App Client Id - leave blank normally.
+client_id>
+Box App Client Secret - leave blank normally.
+client_secret>
 Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
 --------------------
 [remote]
-url = https://beta.rclone.org
+client_id =
+client_secret =
+token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
 --------------------
 y) Yes this is OK
 e) Edit this remote
 d) Delete this remote
-y/e/d> y
+y/e/d> y

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    +

    Note that rclone runs a webserver on your local machine to collect the token as returned from Box. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    +

    Once configured you can then use rclone like this,

    +

    List directories in top level of your Box

    +
    rclone lsd remote:
    +

    List all the files in your Box

    +
    rclone ls remote:
    +

    To copy a local directory to an Box directory called backup

    +
    rclone copy /home/source remote:backup
    +

    Invalid refresh token

    +

    According to the box docs:

    +
    +

    Each refresh_token is valid for one use in 60 days.

    +
    +

    This means that if you

      • Don't use the box remote for 60 days
      • Copy the config file with a box refresh token in and use it in two places

    then rclone will return an error which includes the text Invalid refresh token.

    +

    To fix this you will need to use oauth2 again to update the refresh token. You can use the methods in the remote setup docs, bearing in mind that if you use the copy the config file method, you should not use that remote on the computer you did the authentication on.

    +

    Here is how to do it.

    +
    $ rclone config
     Current remotes:
     
     Name                 Type
     ====                 ====
    -remote               http
    +remote               box
     
     e) Edit existing remote
     n) New remote
    @@ -3292,27 +2516,65 @@ r) Rename remote
     c) Copy remote
     s) Set configuration password
     q) Quit config
    -e/n/d/r/c/s/q> q
    -

    This remote is called remote and can now be used like this

    -

    See all the top level directories

    -
    rclone lsd remote:
    -

    List the contents of a directory

    -
    rclone ls remote:directory
    -

    Sync the remote directory to /home/local/directory, deleting any excess files.

    -
    rclone sync remote:directory /home/local/directory
    -

    Read only

    -

    This remote is read only - you can't upload files to an HTTP server.

    -

    Modified time

    -

    Most HTTP servers store time accurate to 1 second.

    -

    Checksum

    -

    No checksums are stored.

    -

    Usage without a config file

    -

    Note that since only two environment variable need to be set, it is easy to use without a config file like this.

    -
    RCLONE_CONFIG_ZZ_TYPE=http RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org rclone lsd zz:
    -

    Or if you prefer

    -
    export RCLONE_CONFIG_ZZ_TYPE=http
    -export RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org
    -rclone lsd zz:
+e/n/d/r/c/s/q> e
+Choose a number from below, or type in an existing value
+ 1 > remote
+remote> remote
+--------------------
+[remote]
+type = box
+token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
+--------------------
+Edit remote
+Value "client_id" = ""
+Edit? (y/n)>
+y) Yes
+n) No
+y/n> n
+Value "client_secret" = ""
+Edit? (y/n)>
+y) Yes
+n) No
+y/n> n
+Remote config
+Already have a token - refresh?
+y) Yes
+n) No
+y/n> y
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = box
+token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+

    Modified time and hashes

    +

    Box allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

    +

Box supports SHA1 type hashes, so you can use the --checksum flag.

    +
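For example, a minimal sync relying on SHA1 checksums rather than modification times (remote name and local path assumed for illustration):

rclone sync --checksum /home/source remote:backup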

    Transfers

    +

    For files above 50MB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 8MB so increasing --transfers will increase memory use.

    +
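As a sketch of the memory trade-off, raising the parallelism (remote name assumed); with the normal 8MB chunks, 8 simultaneous transfers would buffer roughly 64MB:

rclone copy --transfers 8 /home/source remote:backup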

    Deleting files

    +

    Depending on the enterprise settings for your user, the item will either be actually deleted from Box or moved to the trash.

    +

    Specific options

    +

    Here are the command line options specific to this cloud storage system.

    +

    --box-upload-cutoff=SIZE

    +

    Cutoff for switching to chunked upload - must be >= 50MB. The default is 50MB.

    +
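For instance, to start chunking only at 100MB (value chosen for illustration):

rclone copy --box-upload-cutoff 100M /home/source remote:backup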

    Limitations

    +

    Note that Box is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    +

Box file names can't have the \ character in. rclone maps this to and from an identical looking unicode equivalent ＼.

    +

    Box only supports filenames up to 255 characters in length.

    Crypt

    The crypt remote encrypts and decrypts another remote.

    To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example.

    @@ -3480,11 +2742,11 @@ $ rclone -q ls secret:

    Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.

    There may be an even more secure file name encryption mode in the future which will address the long file name problem.

    -

    Modified time and hashes

    +

    Modified time and hashes

    Crypt stores modification times using the underlying remote so support depends on that.

    Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.

    Note that you should use the rclone cryptcheck command to check the integrity of a crypted remote instead of rclone check which can't check the checksums properly.

    -

    Specific options

    +

    Specific options

    Here are the command line options specific to this cloud storage system.

    --crypt-show-mapping

    If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name.

    @@ -3553,12 +2815,1500 @@ $ rclone -q ls secret:

    Key derivation

Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.

scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.

    +

    Dropbox

    +

    Paths are specified as remote:path

    +

    Dropbox paths may be as deep as required, eg remote:directory/subdirectory.

    +

    The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config walks you through it.

    +

    Here is an example of how to make a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    n) New remote
    +d) Delete remote
    +q) Quit config
    +e/n/d/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Dropbox
    +   \ "dropbox"
    + 5 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 6 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 7 / Google Drive
    +   \ "drive"
    + 8 / Hubic
    +   \ "hubic"
    + 9 / Local Disk
    +   \ "local"
    +10 / Microsoft OneDrive
    +   \ "onedrive"
    +11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +12 / SSH/SFTP Connection
    +   \ "sftp"
    +13 / Yandex Disk
    +   \ "yandex"
    +Storage> 4
    +Dropbox App Key - leave blank normally.
    +app_key>
    +Dropbox App Secret - leave blank normally.
    +app_secret>
    +Remote config
    +Please visit:
    +https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
    +Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
    +--------------------
    +[remote]
    +app_key =
    +app_secret =
    +token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    You can then use it like this,

    +

    List directories in top level of your dropbox

    +
    rclone lsd remote:
    +

    List all the files in your dropbox

    +
    rclone ls remote:
    +

    To copy a local directory to a dropbox directory called backup

    +
    rclone copy /home/source remote:backup
    +

    Modified time and Hashes

    +

    Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.

    +

This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use the --size-only or --checksum flag to stop it.

    +
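For example, to skip the fix-up by comparing sizes only (paths assumed for illustration):

rclone sync --size-only /home/source remote:backup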

    Dropbox supports its own hash type which is checked for all transfers.

    +

    Specific options

    +

    Here are the command line options specific to this cloud storage system.

    +

    --dropbox-chunk-size=SIZE

    +

    Upload chunk size. Max 150M. The default is 128MB. Note that this isn't buffered into memory.

    +

    Limitations

    +

    Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    +

There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail.

    +

    If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.

    +
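That work-around, spelled out:

rclone delete dropbox:dir
rclone rmdir dropbox:dir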

    FTP

    +

    FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package.

    +

    Here is an example of making an FTP configuration. First run

    +
    rclone config
    +

This will guide you through an interactive setup process. An FTP remote only needs a host together with a username and a password. With an anonymous FTP server, you will need to use anonymous as the username and your email address as the password.

    +
    No remotes found - make a new one
    +n) New remote
    +r) Rename remote
    +c) Copy remote
    +s) Set configuration password
    +q) Quit config
    +n/r/c/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Dropbox
    +   \ "dropbox"
    + 5 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 6 / FTP Connection 
    +   \ "ftp"
    + 7 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 8 / Google Drive
    +   \ "drive"
    + 9 / Hubic
    +   \ "hubic"
    +10 / Local Disk
    +   \ "local"
    +11 / Microsoft OneDrive
    +   \ "onedrive"
    +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +13 / SSH/SFTP Connection
    +   \ "sftp"
    +14 / Yandex Disk
    +   \ "yandex"
    +Storage> ftp
    +FTP host to connect to
    +Choose a number from below, or type in your own value
    + 1 / Connect to ftp.example.com
    +   \ "ftp.example.com"
    +host> ftp.example.com
    +FTP username, leave blank for current username, ncw
    +user>
    +FTP port, leave blank to use default (21)
    +port>
    +FTP password
    +y) Yes type in my own password
    +g) Generate random password
    +y/g> y
    +Enter the password:
    +password:
    +Confirm the password:
    +password:
    +Remote config
    +--------------------
    +[remote]
    +host = ftp.example.com
    +user = 
    +port =
    +pass = *** ENCRYPTED ***
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    This remote is called remote and can now be used like this

    +

    See all directories in the home directory

    +
    rclone lsd remote:
    +

    Make a new directory

    +
    rclone mkdir remote:path/to/directory
    +

    List the contents of a directory

    +
    rclone ls remote:path/to/directory
    +

    Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

    +
    rclone sync /home/local/directory remote:directory
    +

    Modified time

    +

    FTP does not support modified times. Any times you see on the server will be time of upload.

    +

    Checksums

    +

    FTP does not support any checksums.

    +

    Limitations

    +

    Note that since FTP isn't HTTP based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth

    +

    Note that --timeout isn't supported (but --contimeout is).

    +

    Note that --bind isn't supported.

    +

    FTP could support server side move but doesn't yet.

    +

    Google Cloud Storage

    +

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

    +

    The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config walks you through it.

    +

    Here is an example of how to make a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    n) New remote
    +d) Delete remote
    +q) Quit config
    +e/n/d/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Dropbox
    +   \ "dropbox"
    + 5 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 6 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 7 / Google Drive
    +   \ "drive"
    + 8 / Hubic
    +   \ "hubic"
    + 9 / Local Disk
    +   \ "local"
    +10 / Microsoft OneDrive
    +   \ "onedrive"
    +11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +12 / SSH/SFTP Connection
    +   \ "sftp"
    +13 / Yandex Disk
    +   \ "yandex"
    +Storage> 6
    +Google Application Client Id - leave blank normally.
    +client_id>
    +Google Application Client Secret - leave blank normally.
    +client_secret>
    +Project number optional - needed only for list/create/delete buckets - see your developer console.
    +project_number> 12345678
    +Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
    +service_account_file>
    +Access Control List for new objects.
    +Choose a number from below, or type in your own value
    + 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
    +   \ "authenticatedRead"
    + 2 / Object owner gets OWNER access, and project team owners get OWNER access.
    +   \ "bucketOwnerFullControl"
    + 3 / Object owner gets OWNER access, and project team owners get READER access.
    +   \ "bucketOwnerRead"
    + 4 / Object owner gets OWNER access [default if left blank].
    +   \ "private"
    + 5 / Object owner gets OWNER access, and project team members get access according to their roles.
    +   \ "projectPrivate"
    + 6 / Object owner gets OWNER access, and all Users get READER access.
    +   \ "publicRead"
    +object_acl> 4
    +Access Control List for new buckets.
    +Choose a number from below, or type in your own value
    + 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
    +   \ "authenticatedRead"
    + 2 / Project team owners get OWNER access [default if left blank].
    +   \ "private"
    + 3 / Project team members get access according to their roles.
    +   \ "projectPrivate"
    + 4 / Project team owners get OWNER access, and all Users get READER access.
    +   \ "publicRead"
    + 5 / Project team owners get OWNER access, and all Users get WRITER access.
    +   \ "publicReadWrite"
    +bucket_acl> 2
    +Location for the newly created buckets.
    +Choose a number from below, or type in your own value
    + 1 / Empty for default location (US).
    +   \ ""
    + 2 / Multi-regional location for Asia.
    +   \ "asia"
    + 3 / Multi-regional location for Europe.
    +   \ "eu"
    + 4 / Multi-regional location for United States.
    +   \ "us"
    + 5 / Taiwan.
    +   \ "asia-east1"
    + 6 / Tokyo.
    +   \ "asia-northeast1"
    + 7 / Singapore.
    +   \ "asia-southeast1"
    + 8 / Sydney.
    +   \ "australia-southeast1"
    + 9 / Belgium.
    +   \ "europe-west1"
    +10 / London.
    +   \ "europe-west2"
    +11 / Iowa.
    +   \ "us-central1"
    +12 / South Carolina.
    +   \ "us-east1"
    +13 / Northern Virginia.
    +   \ "us-east4"
    +14 / Oregon.
    +   \ "us-west1"
    +location> 12
    +The storage class to use when storing objects in Google Cloud Storage.
    +Choose a number from below, or type in your own value
    + 1 / Default
    +   \ ""
    + 2 / Multi-regional storage class
    +   \ "MULTI_REGIONAL"
    + 3 / Regional storage class
    +   \ "REGIONAL"
    + 4 / Nearline storage class
    +   \ "NEARLINE"
    + 5 / Coldline storage class
    +   \ "COLDLINE"
    + 6 / Durable reduced availability storage class
    +   \ "DURABLE_REDUCED_AVAILABILITY"
    +storage_class> 5
    +Remote config
    +Use auto config?
    + * Say Y if not sure
    + * Say N if you are working on a remote or headless machine or Y didn't work
    +y) Yes
    +n) No
    +y/n> y
    +If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    +Log in and authorize rclone for access
    +Waiting for code...
    +Got code
    +--------------------
    +[remote]
    +type = google cloud storage
    +client_id =
    +client_secret =
    +token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
    +project_number = 12345678
    +object_acl = private
    +bucket_acl = private
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

    +

    This remote is called remote and can now be used like this

    +

    See all the buckets in your project

    +
    rclone lsd remote:
    +

    Make a new bucket

    +
    rclone mkdir remote:bucket
    +

    List the contents of a bucket

    +
    rclone ls remote:bucket
    +

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    +
    rclone sync /home/local/directory remote:bucket
    +

    Service Account support

    +

    You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

    +

    To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.

    +

    To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow.

    +
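As a sketch, the relevant part of a config file set up this way might look like the following (the remote name, file path and project number are placeholders):

[remote]
type = google cloud storage
service_account_file = /path/to/service-account.json
project_number = 12345678
object_acl = private
bucket_acl = private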

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    +
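For example, a recursive listing using fewer transactions (bucket name assumed):

rclone ls --fast-list remote:bucket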

    Modified time

    +

Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns.

    +

    Google Drive

    +

    Paths are specified as drive:path

    +

    Drive paths may be as deep as required, eg drive:directory/subdirectory.

    +

    The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config walks you through it.

    +

    Here is an example of how to make a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found - make a new one
    +n) New remote
    +r) Rename remote
    +c) Copy remote
    +s) Set configuration password
    +q) Quit config
    +n/r/c/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Dropbox
    +   \ "dropbox"
    + 5 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 6 / FTP Connection
    +   \ "ftp"
    + 7 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 8 / Google Drive
    +   \ "drive"
    + 9 / Hubic
    +   \ "hubic"
    +10 / Local Disk
    +   \ "local"
    +11 / Microsoft OneDrive
    +   \ "onedrive"
    +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +13 / SSH/SFTP Connection
    +   \ "sftp"
    +14 / Yandex Disk
    +   \ "yandex"
    +Storage> 8
    +Google Application Client Id - leave blank normally.
    +client_id>
    +Google Application Client Secret - leave blank normally.
    +client_secret>
    +Remote config
    +Use auto config?
    + * Say Y if not sure
    + * Say N if you are working on a remote or headless machine or Y didn't work
    +y) Yes
    +n) No
    +y/n> y
    +If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    +Log in and authorize rclone for access
    +Waiting for code...
    +Got code
    +Configure this as a team drive?
    +y) Yes
    +n) No
    +y/n> n
    +--------------------
    +[remote]
    +client_id =
    +client_secret =
    +token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

    +

    You can then use it like this,

    +

    List directories in top level of your drive

    +
    rclone lsd remote:
    +

    List all the files in your drive

    +
    rclone ls remote:
    +

    To copy a local directory to a drive directory called backup

    +
    rclone copy /home/source remote:backup
    +

    Team drives

    +

    If you want to configure the remote to point to a Google Team Drive then answer y to the question Configure this as a team drive?.

    +

    This will fetch the list of Team Drives from google and allow you to configure which one you want to use. You can also type in a team drive ID if you prefer.

    +

    For example:

    +
    Configure this as a team drive?
    +y) Yes
    +n) No
    +y/n> y
    +Fetching team drive list...
    +Choose a number from below, or type in your own value
    + 1 / Rclone Test
    +   \ "xxxxxxxxxxxxxxxxxxxx"
    + 2 / Rclone Test 2
    +   \ "yyyyyyyyyyyyyyyyyyyy"
    + 3 / Rclone Test 3
    +   \ "zzzzzzzzzzzzzzzzzzzz"
    +Enter a Team Drive ID> 1
    +--------------------
    +[remote]
    +client_id =
    +client_secret =
    +token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
    +team_drive = xxxxxxxxxxxxxxxxxxxx
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    Modified time

    +

    Google drive stores modification times accurate to 1 ms.

    +

    Revisions

    +

    Google drive stores revisions of files. When you upload a change to an existing file to google drive using rclone it will create a new revision of that file.

    +

Revisions follow the standard google policy which at time of writing was

 * They are deleted after 30 days or 100 revisions (whatever comes first).
 * They do not count towards a user storage quota.

    Deleting files

    +

    By default rclone will send all files to the trash when deleting files. If deleting them permanently is required then use the --drive-use-trash=false flag, or set the equivalent environment variable.

    +
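For example, to delete a path permanently rather than trashing it (path assumed for illustration):

rclone delete --drive-use-trash=false remote:dir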

    Emptying trash

    +

    If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.

    +

    Specific options

    +

    Here are the command line options specific to this cloud storage system.

    +

    --drive-auth-owner-only

    +

    Only consider files owned by the authenticated user.

    +

    --drive-chunk-size=SIZE

    +

Upload chunk size. Must be a power of 2 >= 256k. Default value is 8 MB.

    +

Making this larger will improve performance, but note that each chunk is buffered in memory, one per transfer.

    +

    Reducing this will reduce memory usage but decrease performance.

    +

    --drive-formats

    +

    Google documents can only be exported from Google drive. When rclone downloads a Google doc it chooses a format to download depending upon this setting.

    +

    By default the formats are docx,xlsx,pptx,svg which are a sensible default for an editable document.

    +

    When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list.

    +

    If you prefer an archive copy then you might use --drive-formats pdf, or if you prefer openoffice/libreoffice formats you might use --drive-formats ods,odt,odp.

    +
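For instance, to download Google Docs as PDFs (local path assumed for illustration):

rclone copy --drive-formats pdf remote:backup /home/archive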

Note that rclone adds the extension to the google doc, so if it is called My Spreadsheet on google docs, it will be exported as My Spreadsheet.xlsx or My Spreadsheet.pdf etc.

    +

    Here are the possible extensions with their corresponding mime types.

| Extension | Mime Type | Description |
|-----------|-----------|-------------|
| csv | text/csv | Standard CSV format for Spreadsheets |
| doc | application/msword | Microsoft Office Document |
| docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document |
| epub | application/epub+zip | E-book format |
| html | text/html | An HTML Document |
| jpg | image/jpeg | A JPEG Image File |
| odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation |
| ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
| ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
| odt | application/vnd.oasis.opendocument.text | Openoffice Document |
| pdf | application/pdf | Adobe PDF Format |
| png | image/png | PNG Image Format |
| pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint |
| rtf | application/rtf | Rich Text Format |
| svg | image/svg+xml | Scalable Vector Graphics Format |
| tsv | text/tab-separated-values | Standard TSV format for spreadsheets |
| txt | text/plain | Plain Text |
| xls | application/vnd.ms-excel | Microsoft Office Spreadsheet |
| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet |
| zip | application/zip | A ZIP file of HTML, Images CSS |
    +

    --drive-list-chunk int

    +

    Size of listing chunk 100-1000. 0 to disable. (default 1000)

    +

    --drive-shared-with-me

    +

    Only show files that are shared with me

    +

    --drive-skip-gdocs

    +

    Skip google documents in all listings. If given, gdocs practically become invisible to rclone.

    +

    --drive-trashed-only

    +

    Only show files that are in the trash. This will show trashed files in their original directory structure.

    +

    --drive-upload-cutoff=SIZE

    +

    File size cutoff for switching to chunked upload. Default is 8 MB.

    +

    --drive-use-trash

    +

    Controls whether files are sent to the trash or deleted permanently. Defaults to true, namely sending files to the trash. Use --drive-use-trash=false to delete files permanently instead.

    +

    Limitations

    +

    Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MBytes/s but lots of small files can take a long time.

    +

    Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server side copies with --disable copy to download and upload the files if you prefer.

    +

    Duplicated files

    +

Sometimes, for no reason I've been able to track down, drive will duplicate a file that rclone uploads. Drive, unlike all the other remotes, can have duplicated files.

    +

    Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

    +

    Use rclone dedupe to fix duplicated files.

    +
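A minimal invocation, which is interactive by default:

rclone dedupe remote: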

    Note that this isn't just a problem with rclone, even Google Photos on Android duplicates files on drive sometimes.

    +

    Rclone appears to be re-copying files it shouldn't

    +

    There are two possible reasons for rclone to recopy files which haven't changed to Google Drive.

    +

    The first is the duplicated file issue above - run rclone dedupe and check your logs for duplicate object or directory messages.

    +

    The second is that sometimes Google reports different sizes for the Google Docs exports which will cause rclone to re-download Google Docs for no apparent reason. --ignore-size is a not very satisfactory work-around for this if it is causing you a lot of problems.

    +

    Google docs downloads sometimes fail with "Failed to copy: read X bytes expecting Y"

    +

    This is the same problem as above. Google reports the google doc is one size, but rclone downloads a different size. Work-around with the --ignore-size flag or wait for rclone to retry the download which it will.

    +
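For example, a hedged version of that work-around when copying docs down (paths assumed for illustration):

rclone copy --ignore-size remote:backup /home/local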

    Making your own client_id

    +

    When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.

    +

    However you might find you get better performance making your own client_id if you are a heavy user. Or you may not depending on exactly how Google have been raising rclone's rate limit.

    +

    Here is how to create your own Google Drive client ID for rclone:

    +
1. Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access)
2. Select a project or create a new project.
3. Under Overview, Google APIs, Google Apps APIs, click "Drive API", then "Enable".
4. Click "Credentials" in the left-side panel (not "Go to credentials", which opens the wizard), then "Create credentials", then "OAuth client ID". It will prompt you to set the OAuth consent screen product name, if you haven't set one already.
5. Choose an application type of "other", and click "Create". (the default name is fine)
6. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.

    (Thanks to @balazer on github for these instructions.)

    +

    HTTP

    +

The HTTP remote is a read only remote for reading files off a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!)

    +

    Paths are specified as remote: or remote:path/to/dir.

    +

    Here is an example of how to make a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found - make a new one
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Dropbox
    +   \ "dropbox"
    + 5 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 6 / FTP Connection
    +   \ "ftp"
    + 7 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 8 / Google Drive
    +   \ "drive"
    + 9 / Hubic
    +   \ "hubic"
    +10 / Local Disk
    +   \ "local"
    +11 / Microsoft OneDrive
    +   \ "onedrive"
    +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +13 / SSH/SFTP Connection
    +   \ "sftp"
    +14 / Yandex Disk
    +   \ "yandex"
    +15 / http Connection
    +   \ "http"
    +Storage> http
    +URL of http host to connect to
    +Choose a number from below, or type in your own value
    + 1 / Connect to example.com
    +   \ "https://example.com"
    +url> https://beta.rclone.org
    +Remote config
    +--------------------
    +[remote]
    +url = https://beta.rclone.org
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +Current remotes:
    +
    +Name                 Type
    +====                 ====
    +remote               http
    +
    +e) Edit existing remote
    +n) New remote
    +d) Delete remote
    +r) Rename remote
    +c) Copy remote
    +s) Set configuration password
    +q) Quit config
    +e/n/d/r/c/s/q> q
    +

    This remote is called remote and can now be used like this

    +

    See all the top level directories

    +
    rclone lsd remote:
    +

    List the contents of a directory

    +
    rclone ls remote:directory
    +

    Sync the remote directory to /home/local/directory, deleting any excess files.

    +
    rclone sync remote:directory /home/local/directory
    +

    Read only

    +

    This remote is read only - you can't upload files to an HTTP server.

    +

    Modified time

    +

    Most HTTP servers store time accurate to 1 second.

    +

    Checksum

    +

    No checksums are stored.

    +

    Usage without a config file

    +

Note that since only two environment variables need to be set, it is easy to use without a config file like this.

    +
    RCLONE_CONFIG_ZZ_TYPE=http RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org rclone lsd zz:
    +

    Or if you prefer

    +
    export RCLONE_CONFIG_ZZ_TYPE=http
    +export RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org
    +rclone lsd zz:
    +

    Hubic

    +

Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.

    +

    The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. rclone config walks you through it.

    +

    Here is an example of how to make a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    n) New remote
    +s) Set configuration password
    +n/s> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Dropbox
    +   \ "dropbox"
    + 5 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 6 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 7 / Google Drive
    +   \ "drive"
    + 8 / Hubic
    +   \ "hubic"
    + 9 / Local Disk
    +   \ "local"
    +10 / Microsoft OneDrive
    +   \ "onedrive"
    +11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +12 / SSH/SFTP Connection
    +   \ "sftp"
    +13 / Yandex Disk
    +   \ "yandex"
    +Storage> 8
    +Hubic Client Id - leave blank normally.
    +client_id>
    +Hubic Client Secret - leave blank normally.
    +client_secret>
    +Remote config
    +Use auto config?
    + * Say Y if not sure
    + * Say N if you are working on a remote or headless machine
    +y) Yes
    +n) No
    +y/n> y
    +If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    +Log in and authorize rclone for access
    +Waiting for code...
    +Got code
    +--------------------
    +[remote]
    +client_id =
    +client_secret =
    +token = {"access_token":"XXXXXX"}
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    +

Note that rclone runs a webserver on your local machine to collect the token as returned from Hubic. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    +

    Once configured you can then use rclone like this,

    +

    List containers in the top level of your Hubic

    +
    rclone lsd remote:
    +

    List all the files in your Hubic

    +
    rclone ls remote:
    +

To copy a local directory to a Hubic directory called backup

    +
    rclone copy /home/source remote:backup
    +

    If you want the directory to be visible in the official Hubic browser, you need to copy your files to the default directory

    +
    rclone copy /home/source remote:default/backup
    +

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    +

    Modified time

    +

    The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

    +

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

    +

Note that Hubic wraps the Swift backend, so most of the properties are the same.

    +

    Limitations

    +

    This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.

    +

    The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

    +

    Microsoft Azure Blob Storage

    +

    Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.

    +

    Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found - make a new one
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Box
    +   \ "box"
    + 5 / Dropbox
    +   \ "dropbox"
    + 6 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 7 / FTP Connection
    +   \ "ftp"
    + 8 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 9 / Google Drive
    +   \ "drive"
    +10 / Hubic
    +   \ "hubic"
    +11 / Local Disk
    +   \ "local"
    +12 / Microsoft Azure Blob Storage
    +   \ "azureblob"
    +13 / Microsoft OneDrive
    +   \ "onedrive"
    +14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +15 / SSH/SFTP Connection
    +   \ "sftp"
    +16 / Yandex Disk
    +   \ "yandex"
    +17 / http Connection
    +   \ "http"
    +Storage> azureblob
    +Storage Account Name
    +account> account_name
    +Storage Account Key
    +key> base64encodedkey==
    +Endpoint for the service - leave blank normally.
    +endpoint> 
    +Remote config
    +--------------------
    +[remote]
    +account = account_name
    +key = base64encodedkey==
    +endpoint = 
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    See all containers

    +
    rclone lsd remote:
    +

    Make a new container

    +
    rclone mkdir remote:container
    +

    List the contents of a container

    +
    rclone ls remote:container
    +

    Sync /home/local/directory to the remote container, deleting any excess files in the container.

    +
    rclone sync /home/local/directory remote:container
    +

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    +

    Modified time

    +

    The modified time is stored as metadata on the object with the mtime key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it.

    +

    Hashes

    +

    MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, eg the local disk.

    +

    Multipart uploads

    +

    Rclone supports multipart uploads with Azure Blob storage. Files bigger than 256MB will be uploaded using chunked upload by default.

    +

    The files will be uploaded in parallel in 4MB chunks (by default). Note that these chunks are buffered in memory and there may be up to --transfers of them being uploaded at once.

    +

Files can't be split into more than 50,000 chunks, so by default the largest file that can be uploaded with a 4MB chunk size is 195GB. Above this rclone will double the chunk size until it creates fewer than 50,000 chunks. By default this means a maximum file size of 3.2TB can be uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M.

    +
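For instance, to get the roughly 5TB limit described above (file and container names assumed for illustration):

rclone copy --azureblob-chunk-size 100M ./bigfile remote:container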

    Note that rclone doesn't commit the block list until the end of the upload which means that there is a limit of 9.5TB of multipart uploads in progress as Azure won't allow more than that amount of uncommitted blocks.

    +

    Specific options

    +

    Here are the command line options specific to this cloud storage system.

    +

    --azureblob-upload-cutoff=SIZE

    +

    Cutoff for switching to chunked upload - must be <= 256MB. The default is 256MB.

    +

    --azureblob-chunk-size=SIZE

    +

    Upload chunk size. Default 4MB. Note that this is stored in memory and there may be up to --transfers chunks stored at once in memory. This can be at most 100MB.

    +

    Limitations

    +

    MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.

    +

    Microsoft OneDrive

    +

    Paths are specified as remote:path

    +

    Paths may be as deep as required, eg remote:directory/subdirectory.

    +

    The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.

    +

    Here is an example of how to make a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found - make a new one
    +n) New remote
    +s) Set configuration password
    +n/s> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Dropbox
    +   \ "dropbox"
    + 5 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 6 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 7 / Google Drive
    +   \ "drive"
    + 8 / Hubic
    +   \ "hubic"
    + 9 / Local Disk
    +   \ "local"
    +10 / Microsoft OneDrive
    +   \ "onedrive"
    +11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +12 / SSH/SFTP Connection
    +   \ "sftp"
    +13 / Yandex Disk
    +   \ "yandex"
    +Storage> 10
    +Microsoft App Client Id - leave blank normally.
    +client_id>
    +Microsoft App Client Secret - leave blank normally.
    +client_secret>
    +Remote config
    +Choose OneDrive account type?
    + * Say b for a OneDrive business account
    + * Say p for a personal OneDrive account
    +b) Business
    +p) Personal
    +b/p> p
    +Use auto config?
    + * Say Y if not sure
    + * Say N if you are working on a remote or headless machine
    +y) Yes
    +n) No
    +y/n> y
    +If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    +Log in and authorize rclone for access
    +Waiting for code...
    +Got code
    +--------------------
    +[remote]
    +client_id =
    +client_secret =
    +token = {"access_token":"XXXXXX"}
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    +

Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    +

    Once configured you can then use rclone like this,

    +

    List directories in top level of your OneDrive

    +
    rclone lsd remote:
    +

    List all the files in your OneDrive

    +
    rclone ls remote:
    +

To copy a local directory to a OneDrive directory called backup

    +
    rclone copy /home/source remote:backup
    +

    OneDrive for Business

    +

There is additional support for OneDrive for Business. Select "b" when asked

    +
    Choose OneDrive account type?
    + * Say b for a OneDrive business account
    + * Say p for a personal OneDrive account
    +b) Business
    +p) Personal
    +b/p> 
    +

After that rclone requires authentication of your account. The application will first authenticate your account, then query the OneDrive resource URL and do a second (silent) authentication for this resource URL.

    +

    Modified time and hashes

    +

    OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

    +

OneDrive supports SHA1 type hashes, so you can use the --checksum flag.

    +

    Deleting files

    +

    Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.

    +

    Specific options

    +

    Here are the command line options specific to this cloud storage system.

    +

    --onedrive-chunk-size=SIZE

    +

Above this size files will be chunked - must be a multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.

    +

    --onedrive-upload-cutoff=SIZE

    +

    Cutoff for switching to chunked upload - must be <= 100MB. The default is 10MB.

    +
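For example, combining the two options with a 20MB chunk size (a multiple of 320k: 20M = 64 × 320k) and a 50MB cutoff (values assumed for illustration):

rclone copy --onedrive-chunk-size 20M --onedrive-upload-cutoff 50M /home/source remote:backup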

    Limitations

    +

    Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    +

There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.

    +

    The largest allowed file size is 10GiB (10,737,418,240 bytes).

    +

    QingStor

    +

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

    +

Here is an example of making a QingStor configuration. First run

    +
    rclone config
    +

    This will guide you through an interactive setup process.

    +
    No remotes found - make a new one
    +n) New remote
    +r) Rename remote
    +c) Copy remote
    +s) Set configuration password
    +q) Quit config
    +n/r/c/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Dropbox
    +   \ "dropbox"
    + 5 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 6 / FTP Connection
    +   \ "ftp"
    + 7 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 8 / Google Drive
    +   \ "drive"
    + 9 / Hubic
    +   \ "hubic"
    +10 / Local Disk
    +   \ "local"
    +11 / Microsoft OneDrive
    +   \ "onedrive"
    +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +13 / QingStor Object Storage
    +   \ "qingstor"
    +14 / SSH/SFTP Connection
    +   \ "sftp"
    +15 / Yandex Disk
    +   \ "yandex"
    +Storage> 13
    +Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
    +Choose a number from below, or type in your own value
    + 1 / Enter QingStor credentials in the next step
    +   \ "false"
    + 2 / Get QingStor credentials from the environment (env vars or IAM)
    +   \ "true"
    +env_auth> 1
    +QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
    +access_key_id> access_key
    +QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
    +secret_access_key> secret_key
    +Enter a endpoint URL to connection QingStor API.
    +Leave blank will use the default value "https://qingstor.com:443"
    +endpoint>
    +Zone connect to. Default is "pek3a".
    +Choose a number from below, or type in your own value
    +   / The Beijing (China) Three Zone
    + 1 | Needs location constraint pek3a.
    +   \ "pek3a"
    +   / The Shanghai (China) First Zone
    + 2 | Needs location constraint sh1a.
    +   \ "sh1a"
    +zone> 1
    +Number of connnection retry.
    +Leave blank will use the default value "3".
    +connection_retries>
    +Remote config
    +--------------------
    +[remote]
    +env_auth = false
    +access_key_id = access_key
    +secret_access_key = secret_key
    +endpoint =
    +zone = pek3a
    +connection_retries =
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    This remote is called remote and can now be used like this

    +

    See all buckets

    +
    rclone lsd remote:
    +

    Make a new bucket

    +
    rclone mkdir remote:bucket
    +

    List the contents of a bucket

    +
    rclone ls remote:bucket
    +

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    +
    rclone sync /home/local/directory remote:bucket
    +

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    +

    Multipart uploads

    +

    rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.

    +

    Buckets and Zone

    +

    With QingStor you can list buckets (rclone lsd) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, incorrect zone, the bucket is not in 'XXX' zone.

    +

    Authentication

    +

There are two ways to supply rclone with a set of QingStor credentials. In order of precedence:

 * Directly in the rclone configuration file (as made by rclone config) - fill in access_key_id and secret_access_key.
 * Runtime configuration - set env_auth to true and leave the keys blank, and rclone will pick the credentials up from the environment (env vars or IAM).
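For the runtime route, a sketch using the same environment-variable configuration mechanism shown in the HTTP and Swift sections (the remote name myqs is assumed):

export RCLONE_CONFIG_MYQS_TYPE=qingstor
export RCLONE_CONFIG_MYQS_ENV_AUTH=true
rclone lsd myqs: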

    Swift

    +

Swift refers to Openstack Object Storage. Commercial implementations of that being:

 * Rackspace Cloud Files
 * Memset Memstore
 * OVH Object Storage

    Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.

    +

    Here is an example of making a swift configuration. First run

    +
    rclone config
    +

    This will guide you through an interactive setup process.

    +
    No remotes found - make a new one
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Box
    +   \ "box"
    + 5 / Dropbox
    +   \ "dropbox"
    + 6 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 7 / FTP Connection
    +   \ "ftp"
    + 8 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 9 / Google Drive
    +   \ "drive"
    +10 / Hubic
    +   \ "hubic"
    +11 / Local Disk
    +   \ "local"
    +12 / Microsoft Azure Blob Storage
    +   \ "azureblob"
    +13 / Microsoft OneDrive
    +   \ "onedrive"
    +14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +15 / QingClound Object Storage
    +   \ "qingstor"
    +16 / SSH/SFTP Connection
    +   \ "sftp"
    +17 / Yandex Disk
    +   \ "yandex"
    +18 / http Connection
    +   \ "http"
    +Storage> swift
    +Get swift credentials from environment variables in standard OpenStack form.
    +Choose a number from below, or type in your own value
    + 1 / Enter swift credentials in the next step
    +   \ "false"
    + 2 / Get swift credentials from environment vars. Leave other fields blank if using this.
    +   \ "true"
    +env_auth> 1
    +User name to log in.
    +user> user_name
    +API key or password.
    +key> password_or_api_key
    +Authentication URL for server.
    +Choose a number from below, or type in your own value
    + 1 / Rackspace US
    +   \ "https://auth.api.rackspacecloud.com/v1.0"
    + 2 / Rackspace UK
    +   \ "https://lon.auth.api.rackspacecloud.com/v1.0"
    + 3 / Rackspace v2
    +   \ "https://identity.api.rackspacecloud.com/v2.0"
    + 4 / Memset Memstore UK
    +   \ "https://auth.storage.memset.com/v1.0"
    + 5 / Memset Memstore UK v2
    +   \ "https://auth.storage.memset.com/v2.0"
    + 6 / OVH
    +   \ "https://auth.cloud.ovh.net/v2.0"
    +auth> 1
    +User domain - optional (v3 auth)
    +domain> Default
    +Tenant name - optional for v1 auth, required otherwise
    +tenant> tenant_name
    +Tenant domain - optional (v3 auth)
    +tenant_domain>
    +Region name - optional
    +region>
    +Storage URL - optional
    +storage_url>
    +AuthVersion - optional - set to (1,2,3) if your auth URL has no version
    +auth_version>
    +Endpoint type to choose from the service catalogue
    +Choose a number from below, or type in your own value
    + 1 / Public (default, choose this if not sure)
    +   \ "public"
    + 2 / Internal (use internal service net)
    +   \ "internal"
    + 3 / Admin
    +   \ "admin"
    +endpoint_type>
    +Remote config
    +--------------------
    +[remote]
    +env_auth = false
    +user = user_name
    +key = password_or_api_key
    +auth = https://auth.api.rackspacecloud.com/v1.0
    +domain = Default
    +tenant =
    +tenant_domain =
    +region =
    +storage_url =
    +auth_version =
    +endpoint_type =
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    This remote is called remote and can now be used like this

    +

    See all containers

    +
    rclone lsd remote:
    +

    Make a new container

    +
    rclone mkdir remote:container
    +

    List the contents of a container

    +
    rclone ls remote:container
    +

    Sync /home/local/directory to the remote container, deleting any excess files in the container.

    +
    rclone sync /home/local/directory remote:container
    +

    Configuration from an Openstack credentials file

    +

An Openstack credentials file typically looks something like this (without the comments)

    +
    export OS_AUTH_URL=https://a.provider.net/v2.0
    +export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
    +export OS_TENANT_NAME="1234567890123456"
    +export OS_USERNAME="123abc567xy"
    +echo "Please enter your OpenStack Password: "
    +read -sr OS_PASSWORD_INPUT
    +export OS_PASSWORD=$OS_PASSWORD_INPUT
    +export OS_REGION_NAME="SBG1"
    +if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
    +

    The config file needs to look something like this where $OS_USERNAME represents the value of the OS_USERNAME variable - 123abc567xy in the example above.

    +
    [remote]
    +type = swift
    +user = $OS_USERNAME
    +key = $OS_PASSWORD
    +auth = $OS_AUTH_URL
    +tenant = $OS_TENANT_NAME
    +

    Note that you may (or may not) need to set region too - try without first.

    +

    Configuration from the environment

    +

    If you prefer you can configure rclone to use swift using a standard set of OpenStack environment variables.

    +

    When you run through the config, make sure you choose true for env_auth and leave everything else blank.

    +

rclone will then set any empty config parameters from the environment using standard OpenStack environment variables. There is a list of the variables in the docs for the swift library.

    +

    Using rclone without a config file

    +

    You can use rclone with swift without a config file, if desired, like this:

    +
    source openstack-credentials-file
    +export RCLONE_CONFIG_MYREMOTE_TYPE=swift
    +export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
    +rclone lsd myremote:
    +

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
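For example, to sync using recursive listings where possible:

    rclone sync --fast-list /home/local/directory remote:container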

    +

    Specific options

    +

    Here are the command line options specific to this cloud storage system.

    +

    --swift-chunk-size=SIZE

    +

Above this size, files will be chunked into a _segments container. The default for this is 5GB, which is its maximum value.
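For example, to chunk files above 1GB instead (a sketch; adjust to your provider's limits):

    rclone copy --swift-chunk-size 1G /home/local/directory remote:container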

    +

    Modified time

    +

The modified time is stored as metadata on the object as X-Object-Meta-Mtime, as a floating point number of seconds since the epoch, accurate to 1 ns.

    +

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
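You can see the modification times rclone reads back from this metadata with, eg

    rclone lsl remote:container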

    +

    Limitations

    +

    The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

    +

    Troubleshooting

    +

    Rclone gives Failed to create file system for "remote:": Bad Request

    +

Due to an oddity of the underlying swift library, it returns a generic "Bad Request" error rather than a more descriptive one when Swift authentication fails.

    +

    So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag.
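For example, to see the full HTTP requests and responses while reproducing the error:

    rclone --dump-bodies lsd remote: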

    +

    This may also be caused by specifying the region when you shouldn't have (eg OVH).

    +

    Rclone gives Failed to create file system: Response didn't have storage storage url and auth token

    +

    This is most likely caused by forgetting to specify your tenant when setting up a swift remote.

    +

    SFTP

    +

    SFTP is the Secure (or SSH) File Transfer Protocol.

    +

    It runs over SSH v2 and is standard with most modern SSH installations.

    +

Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.
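For example, assuming a configured remote called remote:

    rclone lsd remote:            # the user's home directory
    rclone lsd remote:/var/data   # an absolute path (illustrative)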

    +

Here is an example of making an SFTP configuration. First run

    +
    rclone config
    +

    This will guide you through an interactive setup process.

    +
    No remotes found - make a new one
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Dropbox
    +   \ "dropbox"
    + 5 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 6 / FTP Connection
    +   \ "ftp"
    + 7 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 8 / Google Drive
    +   \ "drive"
    + 9 / Hubic
    +   \ "hubic"
    +10 / Local Disk
    +   \ "local"
    +11 / Microsoft OneDrive
    +   \ "onedrive"
    +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +13 / SSH/SFTP Connection
    +   \ "sftp"
    +14 / Yandex Disk
    +   \ "yandex"
    +15 / http Connection
    +   \ "http"
    +Storage> sftp
    +SSH host to connect to
    +Choose a number from below, or type in your own value
    + 1 / Connect to example.com
    +   \ "example.com"
    +host> example.com
    +SSH username, leave blank for current username, ncw
    +user> sftpuser
    +SSH port, leave blank to use default (22)
    +port> 
    +SSH password, leave blank to use ssh-agent.
    +y) Yes type in my own password
    +g) Generate random password
    +n) No leave this optional password blank
    +y/g/n> n
    +Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
    +key_file> 
    +Remote config
    +--------------------
    +[remote]
    +host = example.com
    +user = sftpuser
    +port = 
    +pass = 
    +key_file = 
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    This remote is called remote and can now be used like this

    +

    See all directories in the home directory

    +
    rclone lsd remote:
    +

    Make a new directory

    +
    rclone mkdir remote:path/to/directory
    +

    List the contents of a directory

    +
    rclone ls remote:path/to/directory
    +

    Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

    +
    rclone sync /home/local/directory remote:directory
    +

    SSH Authentication

    +

The SFTP remote supports 3 authentication methods:

 * Password
 * Key file
 * ssh-agent

    Key files should be unencrypted PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa.

    +

    If you don't specify pass or key_file then it will attempt to contact an ssh-agent.
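A minimal sketch of a key file based configuration (host, user and key path are illustrative):

    [remote]
    type = sftp
    host = example.com
    user = sftpuser
    key_file = /home/sftpuser/.ssh/id_rsa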

    +

    ssh-agent on macOS

    +

    Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the OS. The most effective work-around seems to be to start an ssh-agent in each session, eg

    +
    eval `ssh-agent -s` && ssh-add -A
    +

    And then at the end of the session

    +
    eval `ssh-agent -k`
    +

    These commands can be used in scripts of course.

    +

    Modified time

    +

    Modified times are stored on the server to 1 second precision.

    +

    Modified times are used in syncing and are fully supported.

    +

    Limitations

    +

    SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH.
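If those conditions are met, checksum based operations work as usual, eg

    rclone md5sum remote:path/to/directory
    rclone check /home/local/directory remote:directory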

    +

The only ssh agent supported under Windows is PuTTY's Pageant.

    +

    SFTP isn't supported under plan9 until this issue is fixed.

    +

    Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth

    +

    Note that --timeout isn't supported (but --contimeout is).

    +

    Yandex Disk

    +

    Yandex Disk is a cloud storage solution created by Yandex.

    +

    Yandex paths may be as deep as required, eg remote:directory/subdirectory.

    +

    Here is an example of making a yandex configuration. First run

    +
    rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found - make a new one
    +n) New remote
    +s) Set configuration password
    +n/s> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Amazon Drive
    +   \ "amazon cloud drive"
    + 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 3 / Backblaze B2
    +   \ "b2"
    + 4 / Dropbox
    +   \ "dropbox"
    + 5 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 6 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    + 7 / Google Drive
    +   \ "drive"
    + 8 / Hubic
    +   \ "hubic"
    + 9 / Local Disk
    +   \ "local"
    +10 / Microsoft OneDrive
    +   \ "onedrive"
    +11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +12 / SSH/SFTP Connection
    +   \ "sftp"
    +13 / Yandex Disk
    +   \ "yandex"
    +Storage> 13
    +Yandex Client Id - leave blank normally.
    +client_id>
    +Yandex Client Secret - leave blank normally.
    +client_secret>
    +Remote config
    +Use auto config?
    + * Say Y if not sure
    + * Say N if you are working on a remote or headless machine
    +y) Yes
    +n) No
    +y/n> y
    +If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    +Log in and authorize rclone for access
    +Waiting for code...
    +Got code
    +--------------------
    +[remote]
    +client_id =
    +client_secret =
    +token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    +

Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    +

    Once configured you can then use rclone like this,

    +

    See top level directories

    +
    rclone lsd remote:
    +

    Make a new directory

    +
    rclone mkdir remote:directory
    +

    List the contents of a directory

    +
    rclone ls remote:directory
    +

    Sync /home/local/directory to the remote path, deleting any excess files in the path.

    +
    rclone sync /home/local/directory remote:directory
    +

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
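For example:

    rclone sync --fast-list /home/local/directory remote:directory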

    +

    Modified time

    +

Modified times are supported and are stored, accurate to 1 ns, in custom metadata called rclone_modified in RFC3339 with nanoseconds format.

    +

    MD5 checksums

    +

    MD5 checksums are natively supported by Yandex Disk.

    +

    Emptying Trash

    +

    If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.
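For example:

    rclone cleanup remote: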

    Local Filesystem

    Local paths are specified as normal filesystem paths, eg /path/to/wherever, so

    rclone sync /home/source /tmp/destination

    Will sync /home/source to /tmp/destination

These can be configured into the config file for consistency's sake, but it is probably easier not to.
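If you do want a named local remote, a minimal sketch of the config entry (the name mylocal is a placeholder) looks like:

    [mylocal]
    type = local

after which rclone sync mylocal:/home/source /tmp/destination behaves the same as the plain path form.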

Modified time

Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows, and 1 second on OS X.

    Filenames

    Filenames are expected to be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.

    @@ -3578,7 +4328,7 @@ nounc = true

    And use rclone like this:

    rclone copy c:\src nounc:z:\dst

    This will use UNC paths on c:\src but not on z:\dst. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.

Specific options

Here are the command line options specific to local storage.

    Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).

@@ -3603,10 +4353,8 @@ nounc = true
   6 two/three
   6 b/two
   6 b/one
-

    --no-local-unicode-normalization

    -

    By default rclone normalizes (NFC) the unicode representation of filenames and directories. This flag disables that normalization and uses the same representation as the local filesystem.

    -

    This can be useful if you need to retain the local unicode representation and you are using a cloud provider which supports unnormalized names (e.g. S3 or ACD).

    -

    This should also work with any provider if you are using crypt and have file name encryption (the default) or obfuscation turned on.

    +

    --local-no-unicode-normalization

    +

    This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead.

    --one-file-system, -x

    This tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.

For example if you have a directory hierarchy like this

@@ -3628,8 +4376,78 @@ nounc = true
   0 file2

    NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.

NB This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will not appear as a valid flag.
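For example, to back up /home without following mount points inside it (paths are illustrative):

    rclone sync -x /home remote:home-backup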

--skip-links

    This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.
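For example:

    rclone copy --skip-links /home/source remote:backup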

    Changelog

    Contact the rclone project

    Forum

    diff --git a/MANUAL.md b/MANUAL.md index 83eac85df..b0906a083 100644 --- a/MANUAL.md +++ b/MANUAL.md @@ -1,28 +1,39 @@ % rclone(1) User Manual % Nick Craig-Wood -% Jul 22, 2017 +% Sep 30, 2017 Rclone ====== [![Logo](https://rclone.org/img/rclone-120x120.png)](https://rclone.org/) -Rclone is a command line program to sync files and directories to and from +Rclone is a command line program to sync files and directories to and from: - * Google Drive - * Amazon S3 - * Openstack Swift / Rackspace cloud files / Memset Memstore - * Dropbox - * Google Cloud Storage - * Amazon Drive - * Microsoft OneDrive - * Hubic - * Backblaze B2 - * Yandex Disk - * SFTP - * FTP - * HTTP - * The local filesystem +* Amazon Drive +* Amazon S3 +* Backblaze B2 +* Box +* Ceph +* Dreamhost +* Dropbox +* FTP +* Google Cloud Storage +* Google Drive +* HTTP +* Hubic +* Memset Memstore +* Microsoft Azure Blob Storage +* Microsoft OneDrive +* Minio +* OVH +* Openstack Swift +* Oracle Cloud Storage +* QingStor +* Rackspace Cloud Files +* SFTP +* Wasabi +* Yandex Disk +* The local filesystem Features @@ -95,8 +106,11 @@ Unzip the download and cd to the extracted folder. Move rclone to your $PATH. You will be prompted for your password. + sudo mkdir -p /usr/local/bin sudo mv rclone /usr/local/bin/ +(the `mkdir` command is safe to run, even if the directory already exists). + Remove the leftover files. cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip @@ -134,60 +148,6 @@ Instructions - rclone ``` -## Installation with snap ## - -### Quickstart ### - - * install Snapd on your distro using the instructions below - * sudo snap install rclone --classic - * Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details. - -See below for how to install snapd if it isn't already installed - -#### Arch #### - - sudo pacman -S snapd - -enable the snapd systemd service: - - sudo systemctl enable --now snapd.socket - -#### Debian / Ubuntu #### - - sudo apt install snapd - -#### Fedora #### - - sudo dnf copr enable zyga/snapcore - sudo dnf install snapd - -enable the snapd systemd service: - - sudo systemctl enable --now snapd.service - -SELinux support is in beta, so currently: - - sudo setenforce 0 - -to persist, edit `/etc/selinux/config` to set `SELINUX=permissive` and reboot. - -#### Gentoo #### - -Install the [gentoo-snappy overlay](https://github.com/zyga/gentoo-snappy). - -#### OpenEmbedded/Yocto #### - -Install the [snap meta layer](https://github.com/morphis/meta-snappy/blob/master/README.md). - -#### openSUSE #### - - sudo zypper addrepo https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/ snappy - sudo zypper install snapd - -#### OpenWrt #### - -Enable the snap-openwrt feed. 
- Configure --------- @@ -203,21 +163,24 @@ option: See the following for detailed instructions for - * [Google Drive](https://rclone.org/drive/) - * [Amazon S3](https://rclone.org/s3/) - * [Swift / Rackspace Cloudfiles / Memset Memstore](https://rclone.org/swift/) - * [Dropbox](https://rclone.org/dropbox/) - * [Google Cloud Storage](https://rclone.org/googlecloudstorage/) - * [Local filesystem](https://rclone.org/local/) * [Amazon Drive](https://rclone.org/amazonclouddrive/) + * [Amazon S3](https://rclone.org/s3/) * [Backblaze B2](https://rclone.org/b2/) - * [Hubic](https://rclone.org/hubic/) - * [Microsoft OneDrive](https://rclone.org/onedrive/) - * [Yandex Disk](https://rclone.org/yandex/) - * [SFTP](https://rclone.org/sftp/) - * [FTP](https://rclone.org/ftp/) - * [HTTP](https://rclone.org/http/) + * [Box](https://rclone.org/box/) * [Crypt](https://rclone.org/crypt/) - to encrypt other remotes + * [Dropbox](https://rclone.org/dropbox/) + * [FTP](https://rclone.org/ftp/) + * [Google Cloud Storage](https://rclone.org/googlecloudstorage/) + * [Google Drive](https://rclone.org/drive/) + * [HTTP](https://rclone.org/http/) + * [Hubic](https://rclone.org/hubic/) + * [Microsoft Azure Blob Storage](https://rclone.org/azureblob/) + * [Microsoft OneDrive](https://rclone.org/onedrive/) + * [Openstack Swift / Rackspace Cloudfiles / Memset Memstore](https://rclone.org/swift/) + * [QingStor](https://rclone.org/qingstor/) + * [SFTP](https://rclone.org/sftp/) + * [Yandex Disk](https://rclone.org/yandex/) + * [The local filesystem](https://rclone.org/local/) Usage ----- @@ -250,10 +213,26 @@ Enter an interactive configuration session. ### Synopsis -Enter an interactive configuration session. +`rclone config` + enters an interactive configuration sessions where you can setup +new remotes and manage existing ones. You may also set or remove a password to +protect your configuration. + +Additional functions: + + * `rclone config edit` – same as above + * `rclone config file` – show path of configuration file in use + * `rclone config show` – print (decrypted) config file + ``` -rclone config +rclone config [function] [flags] +``` + +### Options + +``` + -h, --help help for config ``` ## rclone copy @@ -305,7 +284,13 @@ the destination directory or not. ``` -rclone copy source:path dest:path +rclone copy source:path dest:path [flags] +``` + +### Options + +``` + -h, --help help for copy ``` ## rclone sync @@ -337,7 +322,13 @@ go there. ``` -rclone sync source:path dest:path +rclone sync source:path dest:path [flags] +``` + +### Options + +``` + -h, --help help for sync ``` ## rclone move @@ -367,7 +358,13 @@ into `dest:path` then delete the original (if no errors on copy) in ``` -rclone move source:path dest:path +rclone move source:path dest:path [flags] +``` + +### Options + +``` + -h, --help help for move ``` ## rclone delete @@ -397,7 +394,13 @@ delete all files bigger than 100MBytes. ``` -rclone delete remote:path +rclone delete remote:path [flags] +``` + +### Options + +``` + -h, --help help for delete ``` ## rclone purge @@ -414,7 +417,13 @@ you want to selectively delete files. ``` -rclone purge remote:path +rclone purge remote:path [flags] +``` + +### Options + +``` + -h, --help help for purge ``` ## rclone mkdir @@ -427,7 +436,13 @@ Make the path if it doesn't already exist. Make the path if it doesn't already exist. ``` -rclone mkdir remote:path +rclone mkdir remote:path [flags] +``` + +### Options + +``` + -h, --help help for mkdir ``` ## rclone rmdir @@ -442,7 +457,13 @@ Remove the path. 
Note that you can't remove a path with objects in it, use purge for that. ``` -rclone rmdir remote:path +rclone rmdir remote:path [flags] +``` + +### Options + +``` + -h, --help help for rmdir ``` ## rclone check @@ -474,6 +495,7 @@ rclone check source:path dest:path [flags] ``` --download Check by downloading rather than with hash. + -h, --help help for check ``` ## rclone ls @@ -486,7 +508,13 @@ List all the objects in the path with size and path. List all the objects in the path with size and path. ``` -rclone ls remote:path +rclone ls remote:path [flags] +``` + +### Options + +``` + -h, --help help for ls ``` ## rclone lsd @@ -499,7 +527,13 @@ List all directories/containers/buckets in the path. List all directories/containers/buckets in the path. ``` -rclone lsd remote:path +rclone lsd remote:path [flags] +``` + +### Options + +``` + -h, --help help for lsd ``` ## rclone lsl @@ -512,7 +546,13 @@ List all the objects path with modification time, size and path. List all the objects path with modification time, size and path. ``` -rclone lsl remote:path +rclone lsl remote:path [flags] +``` + +### Options + +``` + -h, --help help for lsl ``` ## rclone md5sum @@ -528,7 +568,13 @@ is in the same format as the standard md5sum tool produces. ``` -rclone md5sum remote:path +rclone md5sum remote:path [flags] +``` + +### Options + +``` + -h, --help help for md5sum ``` ## rclone sha1sum @@ -544,7 +590,13 @@ is in the same format as the standard sha1sum tool produces. ``` -rclone sha1sum remote:path +rclone sha1sum remote:path [flags] +``` + +### Options + +``` + -h, --help help for sha1sum ``` ## rclone size @@ -557,7 +609,13 @@ Prints the total size and number of objects in remote:path. Prints the total size and number of objects in remote:path. ``` -rclone size remote:path +rclone size remote:path [flags] +``` + +### Options + +``` + -h, --help help for size ``` ## rclone version @@ -570,7 +628,13 @@ Show the version number. Show the version number. ``` -rclone version +rclone version [flags] +``` + +### Options + +``` + -h, --help help for version ``` ## rclone cleanup @@ -586,7 +650,13 @@ versions. Not supported by all remotes. ``` -rclone cleanup remote:path +rclone cleanup remote:path [flags] +``` + +### Options + +``` + -h, --help help for cleanup ``` ## rclone dedupe @@ -597,10 +667,14 @@ Interactively find duplicate files delete/rename them. -By default `dedup` interactively finds duplicate files and offers to +By default `dedupe` interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names. +In the first pass it will merge directories with the same name. It +will do this iteratively until all the identical directories have been +merged. + The `dedupe` command will delete all but one of any identical (same md5sum) files it finds without confirmation. This means that for most duplicated files the `dedupe` command will not be interactive. You @@ -681,6 +755,7 @@ rclone dedupe [mode] remote:path [flags] ``` --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive") + -h, --help help for dedupe ``` ## rclone authorize @@ -696,7 +771,13 @@ rclone from a machine with a browser - use as instructed by rclone config. ``` -rclone authorize +rclone authorize [flags] +``` + +### Options + +``` + -h, --help help for authorize ``` ## rclone cat @@ -737,6 +818,7 @@ rclone cat remote:path [flags] --count int Only print N characters. 
(default -1) --discard Discard the output instead of printing. --head int Only print the first N characters. + -h, --help help for cat --offset int Start printing at offset N (or from end if -ve). --tail int Only print the last N characters. ``` @@ -777,7 +859,13 @@ destination. ``` -rclone copyto source:path dest:path +rclone copyto source:path dest:path [flags] +``` + +### Options + +``` + -h, --help help for copyto ``` ## rclone cryptcheck @@ -813,7 +901,39 @@ After it has run it will log the status of the encryptedremote:. ``` -rclone cryptcheck remote:path cryptedremote:path +rclone cryptcheck remote:path cryptedremote:path [flags] +``` + +### Options + +``` + -h, --help help for cryptcheck +``` + +## rclone cryptdecode + +Cryptdecode returns unencrypted file names. + +### Synopsis + + + +rclone cryptdecode returns unencrypted file names when provided with +a list of encrypted file names. List limit is 10 items. + +use it like this + + rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 + + +``` +rclone cryptdecode encryptedremote: encryptedfilename [flags] +``` + +### Options + +``` + -h, --help help for cryptdecode ``` ## rclone dbhashsum @@ -831,11 +951,35 @@ The output is in the same format as md5sum and sha1sum. ``` -rclone dbhashsum remote:path +rclone dbhashsum remote:path [flags] +``` + +### Options + +``` + -h, --help help for dbhashsum ``` ## rclone genautocomplete +Output completion script for a given shell. + +### Synopsis + + + +Generates a shell completion script for rclone. +Run with --help to list the supported shells. + + +### Options + +``` + -h, --help help for genautocomplete +``` + +## rclone genautocomplete bash + Output bash completion script for rclone. ### Synopsis @@ -847,7 +991,7 @@ Generates a bash shell autocompletion script for rclone. This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg - sudo rclone genautocomplete + sudo rclone genautocomplete bash Logout and login again to use the autocompletion scripts, or source them directly @@ -859,7 +1003,47 @@ there. ``` -rclone genautocomplete [output_file] +rclone genautocomplete bash [output_file] [flags] +``` + +### Options + +``` + -h, --help help for bash +``` + +## rclone genautocomplete zsh + +Output zsh completion script for rclone. + +### Synopsis + + + +Generates a zsh autocompletion script for rclone. + +This writes to /usr/share/zsh/vendor-completions/_rclone by default so will +probably need to be run with sudo or as root, eg + + sudo rclone genautocomplete zsh + +Logout and login again to use the autocompletion scripts, or source +them directly + + autoload -U compinit && compinit + +If you supply a command line argument the script will be written +there. + + +``` +rclone genautocomplete zsh [output_file] [flags] +``` + +### Options + +``` + -h, --help help for zsh ``` ## rclone gendocs @@ -904,6 +1088,7 @@ rclone listremotes [flags] ### Options ``` + -h, --help help for listremotes -l, --long Show the type as well as names. ``` @@ -949,6 +1134,7 @@ rclone lsjson remote:path [flags] ``` --hash Include hashes in the output (may take longer). + -h, --help help for lsjson --no-modtime Don't read the modification time (can speed things up). -R, --recursive Recurse into the listing. 
``` @@ -988,6 +1174,34 @@ When that happens, it is the user's responsibility to stop the mount manually wi # OS X umount /path/to/local/mount +### Installing on Windows ### + +To run rclone mount on Windows, you will need to +download and install [WinFsp](http://www.secfs.net/winfsp/). + +WinFsp is an [open source](https://github.com/billziss-gh/winfsp) +Windows File System Proxy which makes it easy to write user space file +systems for Windows. It provides a FUSE emulation layer which rclone +uses combination with +[cgofuse](https://github.com/billziss-gh/cgofuse). Both of these +packages are by Bill Zissimopoulos who was very helpful during the +implementation of rclone mount for Windows. + +#### Windows caveats #### + +Note that drives created as Administrator are not visible by other +accounts (including the account that was elevated as +Administrator). So if you start a Windows drive from an Administrative +Command Prompt and then try to access the same drive from Explorer +(which does not run as Administrator), you will not be able to see the +new drive. + +The easiest way around this is to start the drive from a normal +command prompt. It is also possible to start a drive from the SYSTEM +account (using [the WinFsp.Launcher +infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)) +which creates drives accessible for everyone on the system. + ### Limitations ### This can only write files seqentially, it can only seek when reading. @@ -1033,13 +1247,6 @@ like this: kill -SIGHUP $(pidof rclone) -### Bugs ### - - * All the remotes should work for read, but some may not for write - * those which need to know the size in advance won't - eg B2 - * maybe should pass in size as -1 to mean work it out - * Or put in an an upload cache to cache the files on disk first - ``` rclone mount remote:path /path/to/mountpoint [flags] @@ -1056,6 +1263,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required. --gid uint32 Override the gid field set by the filesystem. (default 502) + -h, --help help for mount --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k) --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). @@ -1107,7 +1315,13 @@ transfer. ``` -rclone moveto source:path dest:path +rclone moveto source:path dest:path [flags] +``` + +### Options + +``` + -h, --help help for moveto ``` ## rclone ncdu @@ -1144,7 +1358,13 @@ importantly deleting files, but is useful as it stands. ``` -rclone ncdu remote:path +rclone ncdu remote:path [flags] +``` + +### Options + +``` + -h, --help help for ncdu ``` ## rclone obscure @@ -1157,7 +1377,54 @@ Obscure password for use in the rclone.conf Obscure password for use in the rclone.conf ``` -rclone obscure password +rclone obscure password [flags] +``` + +### Options + +``` + -h, --help help for obscure +``` + +## rclone rcat + +Copies standard input to file on remote. + +### Synopsis + + + +rclone rcat reads from standard input (stdin) and copies it to a +single remote file. + + echo "hello world" | rclone rcat remote:path/to/file + ffmpeg - | rclone rcat --checksum remote:path/to/file + +If the remote file already exists, it will be overwritten. 
+ +rcat will try to upload small files in a single request, which is +usually more efficient than the streaming/chunked upload endpoints, +which use multiple requests. Exact behaviour depends on the remote. +What is considered a small file may be set through +`--streaming-upload-cutoff`. Uploading only starts after +the cutoff is reached or if the file ends before that. The data +must fit into RAM. The cutoff needs to be small enough to adhere +the limits of your remote, please see there. Generally speaking, +setting this cutoff too high will decrease your performance. + +Note that the upload can also not be retried because the data is +not kept around until the upload succeeds. If you need to transfer +a lot of data, you're better off caching locally and then +`rclone move` it to the destination. + +``` +rclone rcat remote:path [flags] +``` + +### Options + +``` + -h, --help help for rcat ``` ## rclone rmdirs @@ -1177,7 +1444,75 @@ empty directories in. ``` -rclone rmdirs remote:path +rclone rmdirs remote:path [flags] +``` + +### Options + +``` + -h, --help help for rmdirs +``` + +## rclone tree + +List the contents of the remote in a tree like fashion. + +### Synopsis + + + +rclone tree lists the contents of a remote in a similar way to the +unix tree command. + +For example + + $ rclone tree remote:path + / + ├── file1 + ├── file2 + ├── file3 + └── subdir + ├── file4 + └── file5 + + 1 directories, 5 files + +You can use any of the filtering options with the tree command (eg +--include and --exclude). You can also use --fast-list. + +The tree command has many options for controlling the listing which +are compatible with the tree command. Note that not all of them have +short options as they conflict with rclone's short options. + + +``` +rclone tree remote:path [flags] +``` + +### Options + +``` + -a, --all All files are listed (list . files too). + -C, --color Turn colorization on always. + -d, --dirs-only List directories only. + --dirsfirst List directories before files (-U disables). + --full-path Print the full path prefix for each file. + -h, --help help for tree + --human Print the size in a more human readable way. + --level int Descend only level directories deep. + -D, --modtime Print the date of last modification. + -i, --noindent Don't print indentation lines. + --noreport Turn off file/directory count at end of tree listing. + -o, --output string Output to file instead of stdout. + -p, --protections Print the protections for each file. + -Q, --quote Quote filenames with double quotes. + -s, --size Print the size in bytes of each file. + --sort string Select sort: name,version,size,mtime,ctime. + --sort-ctime Sort files by last status change time. + -t, --sort-modtime Sort files by last modification time. + -r, --sort-reverse Reverse the order of the sort. + -U, --unsorted Leave files unsorted. + --version Sort files alphanumerically by version. ``` @@ -1336,6 +1671,13 @@ If running rclone from a script you might want to use today's date as the directory name passed to `--backup-dir` to store the old files, or you might want to pass `--suffix` with today's date. +### --bind string ### + +Local address to bind to for outgoing connections. This can be an +IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If +the host name doesn't resolve or resoves to more than one IP address +it will give an error. + ### --bwlimit=BANDWIDTH_SPEC ### This option controls the bandwidth limit. 
Limits can be specified @@ -1443,6 +1785,27 @@ connection to go through to a remote object storage system. It is Mode to run dedupe command in. One of `interactive`, `skip`, `first`, `newest`, `oldest`, `rename`. The default is `interactive`. See the dedupe command for more information as to what these options mean. +### --disable FEATURE,FEATURE,... ### + +This disables a comma separated list of optional features. For example +to disable server side move and server side copy use: + + --disable move,copy + +The features can be put in in any case. + +To see a list of which features can be disabled use: + + --disable help + +See the overview [features](/overview/#features) and +[optional features](/overview/#optional-features) to get an idea of +which feature does what. + +This flag can be useful for debugging and in exceptional circumstances +(eg Google Drive limiting the total volume of Server Side Copies to +100GB/day). + ### -n, --dry-run ### Do a trial run with no permanent changes. Use this to see what rclone @@ -1490,6 +1853,26 @@ Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using `--checksum`). +### --immutable ### + +Treat source and destination files as immutable and disallow +modification. + +With this option set, files will be created and deleted as requested, +but existing files will never be updated. If an existing file does +not match between the source and destination, rclone will give the error +`Source and destination exist but do not match: immutable file modified`. + +Note that only commands which transfer files (e.g. `sync`, `copy`, +`move`) are affected by this behavior, and only modification is +disallowed. Files may still be deleted explicitly (e.g. `delete`, +`purge`) or implicitly (e.g. `sync`, `move`). Use `copy --immutable` +if it is desired to avoid deletion as well as modification. + +This can be useful as an additional layer of protection for immutable +or append-only data sets (notably backup archives), where modification +implies corruption and should not be propagated. + ### --log-file=FILE ### Log all of rclone's output to FILE. This is not active by default. @@ -1933,6 +2316,9 @@ only. Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only. +Note that the bodies are buffered in memory so don't use this for +enormous files. + ### --dump-filters ### Dump the filters to the output. Useful to see exactly what include @@ -2459,11 +2845,13 @@ processed in. Prepare a file like this `filter-file.txt` - # a sample exclude rule file + # a sample filter rule file - secret*.jpg + *.jpg + *.png + file2.avi + - /dir/Trash/** + + /dir/** # exclude everything else - * @@ -2471,8 +2859,10 @@ Then use as `--filter-from filter-file.txt`. The rules are processed in the order that they are defined. This example will include all `jpg` and `png` files, exclude any files -matching `secret*.jpg` and include `file2.avi`. Everything else will -be excluded from the sync. +matching `secret*.jpg` and include `file2.avi`. It will also include +everything in the directory `dir` at the root of the sync, except +`dir/Trash` which it will exclude. Everything else will be excluded +from the sync. ### `--files-from` - Read list of source-file names ### @@ -2622,37 +3012,43 @@ show through. Here is an overview of the major features of each cloud storage system. 
-| Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type | -| ---------------------- |:-------:|:-------:|:----------------:|:---------------:|:---------:| -| Google Drive | MD5 | Yes | No | Yes | R/W | -| Amazon S3 | MD5 | Yes | No | No | R/W | -| Openstack Swift | MD5 | Yes | No | No | R/W | -| Dropbox | DBHASH †| Yes | Yes | No | - | -| Google Cloud Storage | MD5 | Yes | No | No | R/W | -| Amazon Drive | MD5 | No | Yes | No | R | -| Microsoft OneDrive | SHA1 | Yes | Yes | No | R | -| Hubic | MD5 | Yes | No | No | R/W | -| Backblaze B2 | SHA1 | Yes | No | No | R/W | -| Yandex Disk | MD5 | Yes | No | No | R/W | -| SFTP | - | Yes | Depends | No | - | -| FTP | - | No | Yes | No | - | -| HTTP | - | No | Yes | No | R | -| The local filesystem | All | Yes | Depends | No | - | +| Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type | +| ---------------------------- |:-----------:|:-------:|:----------------:|:---------------:|:---------:| +| Amazon Drive | MD5 | No | Yes | No | R | +| Amazon S3 | MD5 | Yes | No | No | R/W | +| Backblaze B2 | SHA1 | Yes | No | No | R/W | +| Box | SHA1 | Yes | Yes | No | - | +| Dropbox | DBHASH † | Yes | Yes | No | - | +| FTP | - | No | No | No | - | +| Google Cloud Storage | MD5 | Yes | No | No | R/W | +| Google Drive | MD5 | Yes | No | Yes | R/W | +| HTTP | - | No | No | No | R | +| Hubic | MD5 | Yes | No | No | R/W | +| Microsoft Azure Blob Storage | MD5 | Yes | No | No | R/W | +| Microsoft OneDrive | SHA1 | Yes | Yes | No | R | +| Openstack Swift | MD5 | Yes | No | No | R/W | +| QingStor | MD5 | No | No | No | R/W | +| SFTP | MD5, SHA1 ‡ | Yes | Depends | No | - | +| Yandex Disk | MD5 | Yes | No | No | R/W | +| The local filesystem | All | Yes | Depends | No | - | ### Hash ### -The cloud storage system supports various hash types of the objects. +The cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the `--checksum` flag in syncs and in the `check` command. -To use the checksum checks between filesystems they must support a -common hash type. +To use the verify checksums when transferring between cloud storage +systems they must support a common hash type. † Note that Dropbox supports [its own custom hash](https://www.dropbox.com/developers/reference/content-hash). This is an SHA256 sum of all the 4MB block SHA256s. +‡ SFTP supports checksums if the same login has shell access and `md5sum` +or `sha1sum` as well as `echo` are in the remote's PATH. + ### ModTime ### The cloud storage system supports setting modification times on @@ -2716,23 +3112,25 @@ All the remotes support a basic set of features, but there are some optional features supported by some remotes used to make some operations more efficient. 
-| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | -| ---------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:| -| Google Drive | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | -| Amazon S3 | No | Yes | No | No | No | Yes | -| Openstack Swift | Yes † | Yes | No | No | No | Yes | -| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | -| Google Cloud Storage | Yes | Yes | No | No | No | Yes | -| Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | -| Microsoft OneDrive | Yes | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) | No | -| Hubic | Yes † | Yes | No | No | No | Yes | -| Backblaze B2 | No | No | No | No | Yes | Yes | -| Yandex Disk | Yes | No | No | No | No [#575](https://github.com/ncw/rclone/issues/575) | Yes | -| SFTP | No | No | Yes | Yes | No | No | -| FTP | No | No | Yes | Yes | No | No | -| HTTP | No | No | No | No | No | No | -| The local filesystem | Yes | No | Yes | Yes | No | No | - +| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | +| ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:| +| Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | No | +| Amazon S3 | No | Yes | No | No | No | Yes | Yes | +| Backblaze B2 | No | No | No | No | Yes | Yes | Yes | +| Box | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | Yes | +| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | Yes | +| FTP | No | No | Yes | Yes | No | No | Yes | +| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | +| Google Drive | Yes | Yes | Yes | Yes | Yes | No | Yes | +| HTTP | No | No | No | No | No | No | No | +| Hubic | Yes † | Yes | No | No | No | Yes | Yes | +| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | No | +| Microsoft OneDrive | Yes | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) | No | No | +| Openstack Swift | Yes † | Yes | No | No | No | Yes | Yes | +| QingStor | No | Yes | No | No | No | Yes | No | +| SFTP | No | No | Yes | Yes | No | No | Yes | +| Yandex Disk | Yes | No | No | No | Yes | Yes | Yes | +| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | ### Purge ### @@ -2782,16 +3180,42 @@ The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the `--fast-list` flag to work. See the [rclone docs](/docs/#fast-list) for more details. -Google Drive +### StreamUpload ### + +Some remotes allow files to be uploaded without knowing the file size +in advance. This allows certain operations to work without spooling the +file to local disk first, e.g. `rclone rcat`. + +Amazon Drive ----------------------------------------- -Paths are specified as `drive:path` +Paths are specified as `remote:path` -Drive paths may be as deep as required, eg `drive:directory/subdirectory`. +Paths may be as deep as required, eg `remote:directory/subdirectory`. -The initial setup for drive involves getting a token from Google drive -which you need to do in your browser. `rclone config` walks you -through it. +The initial setup for Amazon Drive involves getting a token from +Amazon which you need to do in your browser. `rclone config` walks +you through it. 
+ +The configuration process for Amazon Drive may involve using an [oauth +proxy](https://github.com/ncw/oauthproxy). This is used to keep the +Amazon credentials out of the source code. The proxy runs in Google's +very secure App Engine environment and doesn't store any credentials +which pass through it. + +**NB** rclone doesn't not currently have its own Amazon Drive +credentials (see [the +forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/) +for why) so you will either need to have your own `client_id` and +`client_secret` with Amazon Drive, or use a a third party ouath proxy +in which case you will need to enter `client_id`, `client_secret`, +`auth_url` and `token_url`. + +Note also if you are not using Amazon's `auth_url` and `token_url`, +(ie you filled in something for those) then if setting up on a remote +machine you can only use the [copying the config method of +configuration](https://rclone.org/remote_setup/#configuring-by-copying-the-config-file) +- `rclone authorize` will not work. Here is an example of how to make a remote called `remote`. First run: @@ -2838,15 +3262,20 @@ Choose a number from below, or type in your own value \ "sftp" 14 / Yandex Disk \ "yandex" -Storage> 8 -Google Application Client Id - leave blank normally. -client_id> -Google Application Client Secret - leave blank normally. -client_secret> +Storage> 1 +Amazon Application Client Id - required. +client_id> your client ID goes here +Amazon Application Client Secret - required. +client_secret> your client secret goes here +Auth server URL - leave blank to use Amazon's. +auth_url> Optional auth URL +Token server url - leave blank to use Amazon's. +token_url> Optional token URL Remote config +Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config. Use auto config? * Say Y if not sure - * Say N if you are working on a remote or headless machine or Y didn't work + * Say N if you are working on a remote or headless machine y) Yes n) No y/n> y @@ -2854,15 +3283,13 @@ If your browser doesn't open automatically go to the following link: http://127. Log in and authorize rclone for access Waiting for code... Got code -Configure this as a team drive? -y) Yes -n) No -y/n> n -------------------- [remote] -client_id = -client_secret = -token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null} +client_id = your client ID goes here +client_secret = your client secret goes here +auth_url = Optional auth URL +token_url = Optional token URL +token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"} -------------------- y) Yes this is OK e) Edit this remote @@ -2870,260 +3297,111 @@ d) Delete this remote y/e/d> y ``` +See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a +machine with no Internet browser available. + Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if you use auto config mode. This only -runs from the moment it opens your browser to the moment you get back -the verification code. This is on `http://127.0.0.1:53682/` and this -it may require you to unblock it temporarily if you are running a host -firewall, or use manual mode. +token as returned from Amazon. 
This only runs from the moment it +opens your browser to the moment you get back the verification +code. This is on `http://127.0.0.1:53682/` and this it may require +you to unblock it temporarily if you are running a host firewall. -You can then use it like this, +Once configured you can then use `rclone` like this, -List directories in top level of your drive +List directories in top level of your Amazon Drive rclone lsd remote: -List all the files in your drive +List all the files in your Amazon Drive rclone ls remote: -To copy a local directory to a drive directory called backup +To copy a local directory to an Amazon Drive directory called backup rclone copy /home/source remote:backup -### Team drives ### +### Modified time and MD5SUMs ### -If you want to configure the remote to point to a Google Team Drive -then answer `y` to the question `Configure this as a team drive?`. +Amazon Drive doesn't allow modification times to be changed via +the API so these won't be accurate or used for syncing. -This will fetch the list of Team Drives from google and allow you to -configure which one you want to use. You can also type in a team -drive ID if you prefer. - -For example: - -``` -Configure this as a team drive? -y) Yes -n) No -y/n> y -Fetching team drive list... -Choose a number from below, or type in your own value - 1 / Rclone Test - \ "xxxxxxxxxxxxxxxxxxxx" - 2 / Rclone Test 2 - \ "yyyyyyyyyyyyyyyyyyyy" - 3 / Rclone Test 3 - \ "zzzzzzzzzzzzzzzzzzzz" -Enter a Team Drive ID> 1 --------------------- -[remote] -client_id = -client_secret = -token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null} -team_drive = xxxxxxxxxxxxxxxxxxxx --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -### Modified time ### - -Google drive stores modification times accurate to 1 ms. - -### Revisions ### - -Google drive stores revisions of files. When you upload a change to -an existing file to google drive using rclone it will create a new -revision of that file. - -Revisions follow the standard google policy which at time of writing -was - - * They are deleted after 30 days or 100 revisions (whatever comes first). - * They do not count towards a user storage quota. +It does store MD5SUMs so for a more accurate sync, you can use the +`--checksum` flag. ### Deleting files ### -By default rclone will delete files permanently when requested. If -sending them to the trash is required instead then use the -`--drive-use-trash` flag. +Any files you delete with rclone will end up in the trash. Amazon +don't provide an API to permanently delete files, nor to empty the +trash, so you will have to do that with one of Amazon's apps or via +the Amazon Drive website. As of November 17, 2016, files are +automatically deleted by Amazon from the trash after 30 days. + +### Using with non `.com` Amazon accounts ### + +Let's say you usually use `amazon.co.uk`. When you authenticate with +rclone it will take you to an `amazon.com` page to log in. Your +`amazon.co.uk` email and password should work here just fine. ### Specific options ### Here are the command line options specific to this cloud storage system. -#### --drive-auth-owner-only #### +#### --acd-templink-threshold=SIZE #### -Only consider files owned by the authenticated user. +Files this size or more will be downloaded via their `tempLink`. 
This +is to work around a problem with Amazon Drive which blocks downloads +of files bigger than about 10GB. The default for this is 9GB which +shouldn't need to be changed. -#### --drive-chunk-size=SIZE #### +To download files above this threshold, rclone requests a `tempLink` +which downloads the file through a temporary URL directly from the +underlying S3 storage. -Upload chunk size. Must a power of 2 >= 256k. Default value is 8 MB. +#### --acd-upload-wait-per-gb=TIME #### -Making this larger will improve performance, but note that each chunk -is buffered in memory one per transfer. +Sometimes Amazon Drive gives an error when a file has been fully +uploaded but the file appears anyway after a little while. This +happens sometimes for files over 1GB in size and nearly every time for +files bigger than 10GB. This parameter controls the time rclone waits +for the file to appear. -Reducing this will reduce memory usage but decrease performance. +The default value for this parameter is 3 minutes per GB, so by +default it will wait 3 minutes for every GB uploaded to see if the +file appears. -#### --drive-auth-owner-only #### +You can disable this feature by setting it to 0. This may cause +conflict errors as rclone retries the failed upload but the file will +most likely appear correctly eventually. -Only consider files owned by the authenticated user. +These values were determined empirically by observing lots of uploads +of big files for a range of file sizes. -#### --drive-formats #### - -Google documents can only be exported from Google drive. When rclone -downloads a Google doc it chooses a format to download depending upon -this setting. - -By default the formats are `docx,xlsx,pptx,svg` which are a sensible -default for an editable document. - -When choosing a format, rclone runs down the list provided in order -and chooses the first file format the doc can be exported as from the -list. If the file can't be exported to a format on the formats list, -then rclone will choose a format from the default list. - -If you prefer an archive copy then you might use `--drive-formats -pdf`, or if you prefer openoffice/libreoffice formats you might use -`--drive-formats ods,odt,odp`. - -Note that rclone adds the extension to the google doc, so if it is -calles `My Spreadsheet` on google docs, it will be exported as `My -Spreadsheet.xlsx` or `My Spreadsheet.pdf` etc. - -Here are the possible extensions with their corresponding mime types. 
- -| Extension | Mime Type | Description | -| --------- |-----------| ------------| -| csv | text/csv | Standard CSV format for Spreadsheets | -| doc | application/msword | Micosoft Office Document | -| docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document | -| epub | application/epub+zip | E-book format | -| html | text/html | An HTML Document | -| jpg | image/jpeg | A JPEG Image File | -| odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation | -| ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet | -| ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet | -| odt | application/vnd.oasis.opendocument.text | Openoffice Document | -| pdf | application/pdf | Adobe PDF Format | -| png | image/png | PNG Image Format| -| pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint | -| rtf | application/rtf | Rich Text Format | -| svg | image/svg+xml | Scalable Vector Graphics Format | -| tsv | text/tab-separated-values | Standard TSV format for spreadsheets | -| txt | text/plain | Plain Text | -| xls | application/vnd.ms-excel | Microsoft Office Spreadsheet | -| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet | -| zip | application/zip | A ZIP file of HTML, Images CSS | - -#### --drive-list-chunk int #### - -Size of listing chunk 100-1000. 0 to disable. (default 1000) - -#### --drive-shared-with-me #### - -Only show files that are shared with me - -#### --drive-skip-gdocs #### - -Skip google documents in all listings. If given, gdocs practically become invisible to rclone. - -#### --drive-trashed-only #### - -Only show files that are in the trash. This will show trashed files -in their original directory structure. - -#### --drive-upload-cutoff=SIZE #### - -File size cutoff for switching to chunked upload. Default is 8 MB. - -#### --drive-use-trash #### - -Send files to the trash instead of deleting permanently. Defaults to -off, namely deleting files permanently. +Upload with the `-v` flag to see more info about what rclone is doing +in this situation. ### Limitations ### -Drive has quite a lot of rate limiting. This causes rclone to be -limited to transferring about 2 files per second only. Individual -files may be transferred much faster at 100s of MBytes/s but lots of -small files can take a long time. +Note that Amazon Drive is case insensitive so you can't have a +file called "Hello.doc" and one called "hello.doc". -### Duplicated files ### +Amazon Drive has rate limiting so you may notice errors in the +sync (429 errors). rclone will automatically retry the sync up to 3 +times by default (see `--retries` flag) which should hopefully work +around this problem. -Sometimes, for no reason I've been able to track down, drive will -duplicate a file that rclone uploads. Drive unlike all the other -remotes can have duplicated files. +Amazon Drive has an internal limit of file sizes that can be uploaded +to the service. This limit is not officially published, but all files +larger than this will fail. -Duplicated files cause problems with the syncing and you will see -messages in the log about duplicates. +At the time of writing (Jan 2016) is in the area of 50GB per file. +This means that larger files are likely to fail. -Use `rclone dedupe` to fix duplicated files. 
-
-Note that this isn't just a problem with rclone, even Google Photos on
-Android duplicates files on drive sometimes.
-
-### Rclone appears to be re-copying files it shouldn't ###
-
-There are two possible reasons for rclone to recopy files which
-haven't changed to Google Drive.
-
-The first is the duplicated file issue above - run `rclone dedupe` and
-check your logs for duplicate object or directory messages.
-
-The second is that sometimes Google reports different sizes for the
-Google Docs exports which will cause rclone to re-download Google Docs
-for no apparent reason. `--ignore-size` is a not very satisfactory
-work-around for this if it is causing you a lot of problems.
-
-### Google docs downloads sometimes fail with "Failed to copy: read X bytes expecting Y" ###
-
-This is the same problem as above. Google reports the google doc is
-one size, but rclone downloads a different size. Work-around with the
-`--ignore-size` flag or wait for rclone to retry the download which it
-will.
-
-### Making your own client_id ###
-
-When you use rclone with Google drive in its default configuration you
-are using rclone's client_id. This is shared between all the rclone
-users. There is a global rate limit on the number of queries per
-second that each client_id can do set by Google. rclone already has a
-high quota and I will continue to make sure it is high enough by
-contacting Google.
-
-However you might find you get better performance making your own
-client_id if you are a heavy user. Or you may not depending on exactly
-how Google have been raising rclone's rate limit.
-
-Here is how to create your own Google Drive client ID for rclone:
-
-1. Log into the [Google API
-Console](https://console.developers.google.com/) with your Google
-account. It doesn't matter what Google account you use. (It need not
-be the same account as the Google Drive you want to access)
-
-2. Select a project or create a new project.
-
-3. Under Overview, Google APIs, Google Apps APIs, click "Drive API",
-then "Enable".
-
-4. Click "Credentials" in the left-side panel (not "Go to
-credentials", which opens the wizard), then "Create credentials", then
-"OAuth client ID". It will prompt you to set the OAuth consent screen
-product name, if you haven't set one already.
-
-5. Choose an application type of "other", and click "Create". (the
-default name is fine)
-
-6. It will show you a client ID and client secret. Use these values
-in rclone config to add a new remote or edit an existing remote.
-
-(Thanks to @balazer on github for these instructions.)
+Unfortunately there is no way for rclone to see that this failure is
+because of file size, so it will retry the operation, as it would any
+other failure. To avoid this problem, use the `--max-size 50000M`
+option to limit the maximum size of uploaded files. Note that
+`--max-size` does not split files into segments, it only ignores files
+over this size.
 
 Amazon S3
 ---------------------------------------
 
@@ -3361,12 +3639,14 @@ There are two ways to supply `rclone` with a set of AWS credentials. In
 order of precedence:
 
  - Directly in the rclone configuration file (as configured by `rclone config`)
-   - set `access_key_id` and `secret_access_key`
+   - set `access_key_id` and `secret_access_key`. `session_token` can be
+     optionally set when using AWS STS.
- Runtime configuration: - set `env_auth` to `true` in the config file - Exporting the following environment variables before running `rclone` - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY` + - Session Token: `AWS_SESSION_TOKEN` - Running `rclone` on an EC2 instance with an IAM role If none of these option actually end up providing `rclone` with AWS @@ -3420,6 +3700,17 @@ Notes on above: For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b) that will generate one or more buckets that will work with `rclone sync`. +### Glacier ### + +You can transition objects to glacier storage using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html). +The bucket can still be synced or copied into normally, but if rclone +tries to access the data you will see an error like below. + + 2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file + +In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html) +the object(s) in question before using rclone. + ### Specific options ### Here are the command line options specific to this cloud storage @@ -3590,491 +3881,96 @@ So once set up, for example to copy files into a bucket rclone copy /path/to/files minio:bucket ``` -Swift ----------------------------------------- +### Wasabi ### -Swift refers to [Openstack Object Storage](https://www.openstack.org/software/openstack-storage/). -Commercial implementations of that being: +[Wasabi](https://wasabi.com) is a cloud-based object storage service for a +broad range of applications and use cases. Wasabi is designed for +individuals and organizations that require a high-performance, +reliable, and secure data storage infrastructure at minimal cost. - * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/) - * [Memset Memstore](https://www.memset.com/cloud/storage/) - -Paths are specified as `remote:container` (or `remote:` for the `lsd` -command.) You may put subdirectories in too, eg `remote:container/path/to/dir`. - -Here is an example of making a swift configuration. First run - - rclone config - -This will guide you through an interactive setup process. +Wasabi provides an S3 interface which can be configured for use with +rclone like this. ``` No remotes found - make a new one n) New remote s) Set configuration password n/s> n -name> remote +name> wasabi Type of storage to configure. Choose a number from below, or type in your own value 1 / Amazon Drive \ "amazon cloud drive" 2 / Amazon S3 (also Dreamhost, Ceph, Minio) \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 11 -User name to log in. -user> user_name -API key or password. -key> password_or_api_key -Authentication URL for server. +[snip] +Storage> s3 +Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. 
Choose a number from below, or type in your own value - 1 / Rackspace US - \ "https://auth.api.rackspacecloud.com/v1.0" - 2 / Rackspace UK - \ "https://lon.auth.api.rackspacecloud.com/v1.0" - 3 / Rackspace v2 - \ "https://identity.api.rackspacecloud.com/v2.0" - 4 / Memset Memstore UK - \ "https://auth.storage.memset.com/v1.0" - 5 / Memset Memstore UK v2 - \ "https://auth.storage.memset.com/v2.0" - 6 / OVH - \ "https://auth.cloud.ovh.net/v2.0" -auth> 1 -User domain - optional (v3 auth) -domain> Default -Tenant name - optional for v1 auth, required otherwise -tenant> tenant_name -Tenant domain - optional (v3 auth) -tenant_domain> -Region name - optional -region> -Storage URL - optional -storage_url> -AuthVersion - optional - set to (1,2,3) if your auth URL has no version -auth_version> -Remote config --------------------- -[remote] -user = user_name -key = password_or_api_key -auth = https://auth.api.rackspacecloud.com/v1.0 -domain = Default -tenant = -tenant_domain = -region = -storage_url = -auth_version = --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -This remote is called `remote` and can now be used like this - -See all containers - - rclone lsd remote: - -Make a new container - - rclone mkdir remote:container - -List the contents of a container - - rclone ls remote:container - -Sync `/home/local/directory` to the remote container, deleting any -excess files in the container. - - rclone sync /home/local/directory remote:container - -### Configuration from an Openstack credentials file ### - -An Opentstack credentials file typically looks something something -like this (without the comments) - -``` -export OS_AUTH_URL=https://a.provider.net/v2.0 -export OS_TENANT_ID=ffffffffffffffffffffffffffffffff -export OS_TENANT_NAME="1234567890123456" -export OS_USERNAME="123abc567xy" -echo "Please enter your OpenStack Password: " -read -sr OS_PASSWORD_INPUT -export OS_PASSWORD=$OS_PASSWORD_INPUT -export OS_REGION_NAME="SBG1" -if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi -``` - -The config file needs to look something like this where `$OS_USERNAME` -represents the value of the `OS_USERNAME` variable - `123abc567xy` in -the example above. - -``` -[remote] -type = swift -user = $OS_USERNAME -key = $OS_PASSWORD -auth = $OS_AUTH_URL -tenant = $OS_TENANT_NAME -``` - -Note that you may (or may not) need to set `region` too - try without first. - -### --fast-list ### - -This remote supports `--fast-list` which allows you to use fewer -transactions in exchange for more memory. See the [rclone -docs](/docs/#fast-list) for more details. - -### Specific options ### - -Here are the command line options specific to this cloud storage -system. - -#### --swift-chunk-size=SIZE #### - -Above this size files will be chunked into a _segments container. The -default for this is 5GB which is its maximum value. - -### Modified time ### - -The modified time is stored as metadata on the object as -`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1 -ns. - -This is a defacto standard (used in the official python-swiftclient -amongst others) for storing the modification time for an object. - -### Limitations ### - -The Swift API doesn't return a correct MD5SUM for segmented files -(Dynamic or Static Large Objects) so rclone won't check or use the -MD5SUM for these. 
- -### Troubleshooting ### - -#### Rclone gives Failed to create file system for "remote:": Bad Request #### - -Due to an oddity of the underlying swift library, it gives a "Bad -Request" error rather than a more sensible error when the -authentication fails for Swift. - -So this most likely means your username / password is wrong. You can -investigate further with the `--dump-bodies` flag. - -This may also be caused by specifying the region when you shouldn't -have (eg OVH). - -#### Rclone gives Failed to create file system: Response didn't have storage storage url and auth token #### - -This is most likely caused by forgetting to specify your tenant when -setting up a swift remote. - -Dropbox ---------------------------------- - -Paths are specified as `remote:path` - -Dropbox paths may be as deep as required, eg -`remote:directory/subdirectory`. - -The initial setup for dropbox involves getting a token from Dropbox -which you need to do in your browser. `rclone config` walks you -through it. - -Here is an example of how to make a remote called `remote`. First run: - - rclone config - -This will guide you through an interactive setup process: - -``` -n) New remote -d) Delete remote -q) Quit config -e/n/d/q> n -name> remote -Type of storage to configure. + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" +env_auth> 1 +AWS Access Key ID - leave blank for anonymous access or runtime credentials. +access_key_id> YOURACCESSKEY +AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. +secret_access_key> YOURSECRETACCESSKEY +Region to connect to. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 4 -Dropbox App Key - leave blank normally. -app_key> -Dropbox App Secret - leave blank normally. -app_secret> -Remote config -Please visit: -https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code -Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX --------------------- -[remote] -app_key = -app_secret = -token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -You can then use it like this, - -List directories in top level of your dropbox - - rclone lsd remote: - -List all the files in your dropbox - - rclone ls remote: - -To copy a local directory to a dropbox directory called backup - - rclone copy /home/source remote:backup - -### Modified time and Hashes ### - -Dropbox supports modified times, but the only way to set a -modification time is to re-upload the file. - -This means that if you uploaded your data with an older version of -rclone which didn't support the v2 API and modified times, rclone will -decide to upload all your old data to fix the modification times. If -you don't want this to happen use `--size-only` or `--checksum` flag -to stop it. 
- -Dropbox supports [its own hash -type](https://www.dropbox.com/developers/reference/content-hash) which -is checked for all transfers. - -### Specific options ### - -Here are the command line options specific to this cloud storage -system. - -#### --dropbox-chunk-size=SIZE #### - -Upload chunk size. Max 150M. The default is 128MB. Note that this -isn't buffered into memory. - -### Limitations ### - -Note that Dropbox is case insensitive so you can't have a file called -"Hello.doc" and one called "hello.doc". - -There are some file names such as `thumbs.db` which Dropbox can't -store. There is a full list of them in the ["Ignored Files" section -of this document](https://www.dropbox.com/en/help/145). Rclone will -issue an error message `File name disallowed - not uploading` if it -attempt to upload one of those file names, but the sync won't fail. - -If you have more than 10,000 files in a directory then `rclone purge -dropbox:dir` will return the error `Failed to purge: There are too -many files involved in this operation`. As a work-around do an -`rclone delete dropbox:dir` followed by an `rclone rmdir dropbox:dir`. - -Google Cloud Storage -------------------------------------------------- - -Paths are specified as `remote:bucket` (or `remote:` for the `lsd` -command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. - -The initial setup for google cloud storage involves getting a token from Google Cloud Storage -which you need to do in your browser. `rclone config` walks you -through it. - -Here is an example of how to make a remote called `remote`. First run: - - rclone config - -This will guide you through an interactive setup process: - -``` -n) New remote -d) Delete remote -q) Quit config -e/n/d/q> n -name> remote -Type of storage to configure. + / The default endpoint - a good choice if you are unsure. + 1 | US Region, Northern Virginia or Pacific Northwest. + | Leave location constraint empty. + \ "us-east-1" +[snip] +region> us-east-1 +Endpoint for S3 API. +Leave blank if using AWS to use the default endpoint for the region. +Specify if using an S3 clone such as Ceph. +endpoint> s3.wasabisys.com +Location constraint - must be set to match the Region. Used when creating buckets only. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 6 -Google Application Client Id - leave blank normally. -client_id> -Google Application Client Secret - leave blank normally. -client_secret> -Project number optional - needed only for list/create/delete buckets - see your developer console. -project_number> 12345678 -Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. -service_account_file> -Access Control List for new objects. -Choose a number from below, or type in your own value - 1 / Object owner gets OWNER access, and all Authenticated Users get READER access. - \ "authenticatedRead" - 2 / Object owner gets OWNER access, and project team owners get OWNER access. 
- \ "bucketOwnerFullControl" - 3 / Object owner gets OWNER access, and project team owners get READER access. - \ "bucketOwnerRead" - 4 / Object owner gets OWNER access [default if left blank]. - \ "private" - 5 / Object owner gets OWNER access, and project team members get access according to their roles. - \ "projectPrivate" - 6 / Object owner gets OWNER access, and all Users get READER access. - \ "publicRead" -object_acl> 4 -Access Control List for new buckets. -Choose a number from below, or type in your own value - 1 / Project team owners get OWNER access, and all Authenticated Users get READER access. - \ "authenticatedRead" - 2 / Project team owners get OWNER access [default if left blank]. - \ "private" - 3 / Project team members get access according to their roles. - \ "projectPrivate" - 4 / Project team owners get OWNER access, and all Users get READER access. - \ "publicRead" - 5 / Project team owners get OWNER access, and all Users get WRITER access. - \ "publicReadWrite" -bucket_acl> 2 -Location for the newly created buckets. -Choose a number from below, or type in your own value - 1 / Empty for default location (US). + 1 / Empty for US Region, Northern Virginia or Pacific Northwest. \ "" - 2 / Multi-regional location for Asia. - \ "asia" - 3 / Multi-regional location for Europe. - \ "eu" - 4 / Multi-regional location for United States. - \ "us" - 5 / Taiwan. - \ "asia-east1" - 6 / Tokyo. - \ "asia-northeast1" - 7 / Singapore. - \ "asia-southeast1" - 8 / Sydney. - \ "australia-southeast1" - 9 / Belgium. - \ "europe-west1" -10 / London. - \ "europe-west2" -11 / Iowa. - \ "us-central1" -12 / South Carolina. - \ "us-east1" -13 / Northern Virginia. - \ "us-east4" -14 / Oregon. - \ "us-west1" -location> 12 -The storage class to use when storing objects in Google Cloud Storage. +[snip] +location_constraint> +Canned ACL used when creating buckets and/or storing objects in S3. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Choose a number from below, or type in your own value + 1 / Owner gets FULL_CONTROL. No one else has access rights (default). + \ "private" +[snip] +acl> +The server-side encryption algorithm used when storing this object in S3. +Choose a number from below, or type in your own value + 1 / None + \ "" + 2 / AES256 + \ "AES256" +server_side_encryption> +The storage class to use when storing objects in S3. Choose a number from below, or type in your own value 1 / Default \ "" - 2 / Multi-regional storage class - \ "MULTI_REGIONAL" - 3 / Regional storage class - \ "REGIONAL" - 4 / Nearline storage class - \ "NEARLINE" - 5 / Coldline storage class - \ "COLDLINE" - 6 / Durable reduced availability storage class - \ "DURABLE_REDUCED_AVAILABILITY" -storage_class> 5 + 2 / Standard storage class + \ "STANDARD" + 3 / Reduced redundancy storage class + \ "REDUCED_REDUNDANCY" + 4 / Standard Infrequent Access storage class + \ "STANDARD_IA" +storage_class> Remote config -Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine or Y didn't work -y) Yes -n) No -y/n> y -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... 
-Got code -------------------- -[remote] -type = google cloud storage -client_id = -client_secret = -token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null} -project_number = 12345678 -object_acl = private -bucket_acl = private +[wasabi] +env_auth = false +access_key_id = YOURACCESSKEY +secret_access_key = YOURSECRETACCESSKEY +region = us-east-1 +endpoint = s3.wasabisys.com +location_constraint = +acl = +server_side_encryption = +storage_class = -------------------- y) Yes this is OK e) Edit this remote @@ -4082,569 +3978,21 @@ d) Delete this remote y/e/d> y ``` -Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if you use auto config mode. This only -runs from the moment it opens your browser to the moment you get back -the verification code. This is on `http://127.0.0.1:53682/` and this -it may require you to unblock it temporarily if you are running a host -firewall, or use manual mode. - -This remote is called `remote` and can now be used like this - -See all the buckets in your project - - rclone lsd remote: - -Make a new bucket - - rclone mkdir remote:bucket - -List the contents of a bucket - - rclone ls remote:bucket - -Sync `/home/local/directory` to the remote bucket, deleting any excess -files in the bucket. - - rclone sync /home/local/directory remote:bucket - -### Service Account support ### - -You can set up rclone with Google Cloud Storage in an unattended mode, -i.e. not tied to a specific end-user Google account. This is useful -when you want to synchronise files onto machines that don't have -actively logged-in users, for example build machines. - -To get credentials for Google Cloud Platform -[IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts), -please head to the -[Service Account](https://console.cloud.google.com/permissions/serviceaccounts) -section of the Google Developer Console. Service Accounts behave just -like normal `User` permissions in -[Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control), -so you can limit their access (e.g. make them read only). After -creating an account, a JSON file containing the Service Account's -credentials will be downloaded onto your machines. These credentials -are what rclone will use for authentication. - -To use a Service Account instead of OAuth2 token flow, enter the path -to your Service Account credentials at the `service_account_file` -prompt and rclone won't use the browser based authentication -flow. - -### --fast-list ### - -This remote supports `--fast-list` which allows you to use fewer -transactions in exchange for more memory. See the [rclone -docs](/docs/#fast-list) for more details. - -### Modified time ### - -Google google cloud storage stores md5sums natively and rclone stores -modification times as metadata on the object, under the "mtime" key in -RFC3339 format accurate to 1ns. - -Amazon Drive ------------------------------------------ - -Paths are specified as `remote:path` - -Paths may be as deep as required, eg `remote:directory/subdirectory`. - -The initial setup for Amazon Drive involves getting a token from -Amazon which you need to do in your browser. `rclone config` walks -you through it. - -The configuration process for Amazon Drive may involve using an [oauth -proxy](https://github.com/ncw/oauthproxy). 
This is used to keep the -Amazon credentials out of the source code. The proxy runs in Google's -very secure App Engine environment and doesn't store any credentials -which pass through it. - -**NB** rclone doesn't not currently have its own Amazon Drive -credentials (see [the -forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/) -for why) so you will either need to have your own `client_id` and -`client_secret` with Amazon Drive, or use a a third party ouath proxy -in which case you will need to enter `client_id`, `client_secret`, -`auth_url` and `token_url`. - -Note also if you are not using Amazon's `auth_url` and `token_url`, -(ie you filled in something for those) then if setting up on a remote -machine you can only use the [copying the config method of -configuration](https://rclone.org/remote_setup/#configuring-by-copying-the-config-file) -- `rclone authorize` will not work. - -Here is an example of how to make a remote called `remote`. First run: - - rclone config - -This will guide you through an interactive setup process: +This will leave the config file looking like this. ``` -No remotes found - make a new one -n) New remote -r) Rename remote -c) Copy remote -s) Set configuration password -q) Quit config -n/r/c/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" -10 / Local Disk - \ "local" -11 / Microsoft OneDrive - \ "onedrive" -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -13 / SSH/SFTP Connection - \ "sftp" -14 / Yandex Disk - \ "yandex" -Storage> 1 -Amazon Application Client Id - required. -client_id> your client ID goes here -Amazon Application Client Secret - required. -client_secret> your client secret goes here -Auth server URL - leave blank to use Amazon's. -auth_url> Optional auth URL -Token server url - leave blank to use Amazon's. -token_url> Optional token URL -Remote config -Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config. -Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine -y) Yes -n) No -y/n> y -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... -Got code --------------------- -[remote] -client_id = your client ID goes here -client_secret = your client secret goes here -auth_url = Optional auth URL -token_url = Optional token URL -token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y +[wasabi] +env_auth = false +access_key_id = YOURACCESSKEY +secret_access_key = YOURSECRETACCESSKEY +region = us-east-1 +endpoint = s3.wasabisys.com +location_constraint = +acl = +server_side_encryption = +storage_class = ``` -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. 
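+Once configured, the remote can be used like any other S3 remote, for
+example (the bucket name here is illustrative only):
+
+    rclone mkdir wasabi:bucket
+    rclone copy /path/to/files wasabi:bucket
+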
- -Note that rclone runs a webserver on your local machine to collect the -token as returned from Amazon. This only runs from the moment it -opens your browser to the moment you get back the verification -code. This is on `http://127.0.0.1:53682/` and this it may require -you to unblock it temporarily if you are running a host firewall. - -Once configured you can then use `rclone` like this, - -List directories in top level of your Amazon Drive - - rclone lsd remote: - -List all the files in your Amazon Drive - - rclone ls remote: - -To copy a local directory to an Amazon Drive directory called backup - - rclone copy /home/source remote:backup - -### Modified time and MD5SUMs ### - -Amazon Drive doesn't allow modification times to be changed via -the API so these won't be accurate or used for syncing. - -It does store MD5SUMs so for a more accurate sync, you can use the -`--checksum` flag. - -### Deleting files ### - -Any files you delete with rclone will end up in the trash. Amazon -don't provide an API to permanently delete files, nor to empty the -trash, so you will have to do that with one of Amazon's apps or via -the Amazon Drive website. As of November 17, 2016, files are -automatically deleted by Amazon from the trash after 30 days. - -### Using with non `.com` Amazon accounts ### - -Let's say you usually use `amazon.co.uk`. When you authenticate with -rclone it will take you to an `amazon.com` page to log in. Your -`amazon.co.uk` email and password should work here just fine. - -### Specific options ### - -Here are the command line options specific to this cloud storage -system. - -#### --acd-templink-threshold=SIZE #### - -Files this size or more will be downloaded via their `tempLink`. This -is to work around a problem with Amazon Drive which blocks downloads -of files bigger than about 10GB. The default for this is 9GB which -shouldn't need to be changed. - -To download files above this threshold, rclone requests a `tempLink` -which downloads the file through a temporary URL directly from the -underlying S3 storage. - -#### --acd-upload-wait-per-gb=TIME #### - -Sometimes Amazon Drive gives an error when a file has been fully -uploaded but the file appears anyway after a little while. This -happens sometimes for files over 1GB in size and nearly every time for -files bigger than 10GB. This parameter controls the time rclone waits -for the file to appear. - -The default value for this parameter is 3 minutes per GB, so by -default it will wait 3 minutes for every GB uploaded to see if the -file appears. - -You can disable this feature by setting it to 0. This may cause -conflict errors as rclone retries the failed upload but the file will -most likely appear correctly eventually. - -These values were determined empirically by observing lots of uploads -of big files for a range of file sizes. - -Upload with the `-v` flag to see more info about what rclone is doing -in this situation. - -### Limitations ### - -Note that Amazon Drive is case insensitive so you can't have a -file called "Hello.doc" and one called "hello.doc". - -Amazon Drive has rate limiting so you may notice errors in the -sync (429 errors). rclone will automatically retry the sync up to 3 -times by default (see `--retries` flag) which should hopefully work -around this problem. - -Amazon Drive has an internal limit of file sizes that can be uploaded -to the service. This limit is not officially published, but all files -larger than this will fail. - -At the time of writing (Jan 2016) is in the area of 50GB per file. 
-This means that larger files are likely to fail. - -Unfortunately there is no way for rclone to see that this failure is -because of file size, so it will retry the operation, as any other -failure. To avoid this problem, use `--max-size 50000M` option to limit -the maximum size of uploaded files. Note that `--max-size` does not split -files into segments, it only ignores files over this size. - -Microsoft OneDrive ------------------------------------------ - -Paths are specified as `remote:path` - -Paths may be as deep as required, eg `remote:directory/subdirectory`. - -The initial setup for OneDrive involves getting a token from -Microsoft which you need to do in your browser. `rclone config` walks -you through it. - -Here is an example of how to make a remote called `remote`. First run: - - rclone config - -This will guide you through an interactive setup process: - -``` -No remotes found - make a new one -n) New remote -s) Set configuration password -n/s> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 10 -Microsoft App Client Id - leave blank normally. -client_id> -Microsoft App Client Secret - leave blank normally. -client_secret> -Remote config -Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine -y) Yes -n) No -y/n> y -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... -Got code --------------------- -[remote] -client_id = -client_secret = -token = {"access_token":"XXXXXX"} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Microsoft. This only runs from the moment it -opens your browser to the moment you get back the verification -code. This is on `http://127.0.0.1:53682/` and this it may require -you to unblock it temporarily if you are running a host firewall. - -Once configured you can then use `rclone` like this, - -List directories in top level of your OneDrive - - rclone lsd remote: - -List all the files in your OneDrive - - rclone ls remote: - -To copy a local directory to an OneDrive directory called backup - - rclone copy /home/source remote:backup - -### Modified time and hashes ### - -OneDrive allows modification times to be set on objects accurate to 1 -second. These will be used to detect whether objects need syncing or -not. - -One drive supports SHA1 type hashes, so you can use `--checksum` flag. - - -### Deleting files ### - -Any files you delete with rclone will end up in the trash. 
Microsoft -doesn't provide an API to permanently delete files, nor to empty the -trash, so you will have to do that with one of Microsoft's apps or via -the OneDrive website. - -### Specific options ### - -Here are the command line options specific to this cloud storage -system. - -#### --onedrive-chunk-size=SIZE #### - -Above this size files will be chunked - must be multiple of 320k. The -default is 10MB. Note that the chunks will be buffered into memory. - -#### --onedrive-upload-cutoff=SIZE #### - -Cutoff for switching to chunked upload - must be <= 100MB. The default -is 10MB. - -### Limitations ### - -Note that OneDrive is case insensitive so you can't have a -file called "Hello.doc" and one called "hello.doc". - -Rclone only supports your default OneDrive, and doesn't work with One -Drive for business. Both these issues may be fixed at some point -depending on user demand! - -There are quite a few characters that can't be in OneDrive file -names. These can't occur on Windows platforms, but on non-Windows -platforms they are common. Rclone will map these names to and from an -identical looking unicode equivalent. For example if a file has a `?` -in it will be mapped to `?` instead. - -The largest allowed file size is 10GiB (10,737,418,240 bytes). - -Hubic ------------------------------------------ - -Paths are specified as `remote:path` - -Paths are specified as `remote:container` (or `remote:` for the `lsd` -command.) You may put subdirectories in too, eg `remote:container/path/to/dir`. - -The initial setup for Hubic involves getting a token from Hubic which -you need to do in your browser. `rclone config` walks you through it. - -Here is an example of how to make a remote called `remote`. First run: - - rclone config - -This will guide you through an interactive setup process: - -``` -n) New remote -s) Set configuration password -n/s> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 8 -Hubic Client Id - leave blank normally. -client_id> -Hubic Client Secret - leave blank normally. -client_secret> -Remote config -Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine -y) Yes -n) No -y/n> y -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... -Got code --------------------- -[remote] -client_id = -client_secret = -token = {"access_token":"XXXXXX"} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Hubic. This only runs from the moment it opens -your browser to the moment you get back the verification code. 
This -is on `http://127.0.0.1:53682/` and this it may require you to unblock -it temporarily if you are running a host firewall. - -Once configured you can then use `rclone` like this, - -List containers in the top level of your Hubic - - rclone lsd remote: - -List all the files in your Hubic - - rclone ls remote: - -To copy a local directory to an Hubic directory called backup - - rclone copy /home/source remote:backup - -If you want the directory to be visible in the official *Hubic -browser*, you need to copy your files to the `default` directory - - rclone copy /home/source remote:default/backup - -### --fast-list ### - -This remote supports `--fast-list` which allows you to use fewer -transactions in exchange for more memory. See the [rclone -docs](/docs/#fast-list) for more details. - -### Modified time ### - -The modified time is stored as metadata on the object as -`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1 -ns. - -This is a defacto standard (used in the official python-swiftclient -amongst others) for storing the modification time for an object. - -Note that Hubic wraps the Swift backend, so most of the properties of -are the same. - -### Limitations ### - -This uses the normal OpenStack Swift mechanism to refresh the Swift -API credentials and ignores the expires field returned by the Hubic -API. - -The Swift API doesn't return a correct MD5SUM for segmented files -(Dynamic or Static Large Objects) so rclone won't check or use the -MD5SUM for these. - Backblaze B2 ---------------------------------------- @@ -4779,10 +4127,13 @@ used. When rclone uploads a new version of a file it creates a [new version of it](https://www.backblaze.com/b2/docs/file_versions.html). -Likewise when you delete a file, the old version will still be -available. +Likewise when you delete a file, the old version will be marked hidden +and still be available. Conversely, you may opt in to a "hard delete" +of files with the `--b2-hard-delete` flag which would permanently remove +the file instead of hiding it. -Old versions of files are visible using the `--b2-versions` flag. +Old versions of files, where available, are visible using the +`--b2-versions` flag. If you wish to remove all the old versions then you can use the `rclone cleanup remote:bucket` command which will delete all the old @@ -4945,419 +4296,17 @@ server to the nearest millisecond appended to them. Note that when using `--b2-versions` no file write operations are permitted, so you can't upload files or delete them. -Yandex Disk ----------------------------------------- +Box +----------------------------------------- -[Yandex Disk](https://disk.yandex.com) is a cloud storage solution created by [Yandex](https://yandex.com). +Paths are specified as `remote:path` -Yandex paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, eg `remote:directory/subdirectory`. -Here is an example of making a yandex configuration. First run +The initial setup for Box involves getting a token from Box which you +need to do in your browser. `rclone config` walks you through it. - rclone config - -This will guide you through an interactive setup process: - -``` -No remotes found - make a new one -n) New remote -s) Set configuration password -n/s> n -name> remote -Type of storage to configure. 
-Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 13 -Yandex Client Id - leave blank normally. -client_id> -Yandex Client Secret - leave blank normally. -client_secret> -Remote config -Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine -y) Yes -n) No -y/n> y -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... -Got code --------------------- -[remote] -client_id = -client_secret = -token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Yandex Disk. This only runs from the moment it -opens your browser to the moment you get back the verification code. -This is on `http://127.0.0.1:53682/` and this it may require you to -unblock it temporarily if you are running a host firewall. - -Once configured you can then use `rclone` like this, - -See top level directories - - rclone lsd remote: - -Make a new directory - - rclone mkdir remote:directory - -List the contents of a directory - - rclone ls remote:directory - -Sync `/home/local/directory` to the remote path, deleting any -excess files in the path. - - rclone sync /home/local/directory remote:directory - -### --fast-list ### - -This remote supports `--fast-list` which allows you to use fewer -transactions in exchange for more memory. See the [rclone -docs](/docs/#fast-list) for more details. - -### Modified time ### - -Modified times are supported and are stored accurate to 1 ns in custom -metadata called `rclone_modified` in RFC3339 with nanoseconds format. - -### MD5 checksums ### - -MD5 checksums are natively supported by Yandex Disk. - -SFTP ----------------------------------------- - -SFTP is the [Secure (or SSH) File Transfer -Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol). - -It runs over SSH v2 and is standard with most modern SSH -installations. - -Paths are specified as `remote:path`. If the path does not begin with -a `/` it is relative to the home directory of the user. An empty path -`remote:` refers to the users home directory. - -Here is an example of making a SFTP configuration. First run - - rclone config - -This will guide you through an interactive setup process. - -``` -No remotes found - make a new one -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -name> remote -Type of storage to configure. 
-Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" -10 / Local Disk - \ "local" -11 / Microsoft OneDrive - \ "onedrive" -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -13 / SSH/SFTP Connection - \ "sftp" -14 / Yandex Disk - \ "yandex" -15 / http Connection - \ "http" -Storage> sftp -SSH host to connect to -Choose a number from below, or type in your own value - 1 / Connect to example.com - \ "example.com" -host> example.com -SSH username, leave blank for current username, ncw -user> sftpuser -SSH port, leave blank to use default (22) -port> -SSH password, leave blank to use ssh-agent. -y) Yes type in my own password -g) Generate random password -n) No leave this optional password blank -y/g/n> n -Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. -key_file> -Remote config --------------------- -[remote] -host = example.com -user = sftpuser -port = -pass = -key_file = --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -This remote is called `remote` and can now be used like this - -See all directories in the home directory - - rclone lsd remote: - -Make a new directory - - rclone mkdir remote:path/to/directory - -List the contents of a directory - - rclone ls remote:path/to/directory - -Sync `/home/local/directory` to the remote directory, deleting any -excess files in the directory. - - rclone sync /home/local/directory remote:directory - -### SSH Authentication ### - -The SFTP remote supports 3 authentication methods - - * Password - * Key file - * ssh-agent - -Key files should be unencrypted PEM-encoded private key files. For -instance `/home/$USER/.ssh/id_rsa`. - -If you don't specify `pass` or `key_file` then it will attempt to -contact an ssh-agent. - -### ssh-agent on macOS ### - -Note that there seem to be various problems with using an ssh-agent on -macOS due to recent changes in the OS. The most effective work-around -seems to be to start an ssh-agent in each session, eg - - eval `ssh-agent -s` && ssh-add -A - -And then at the end of the session - - eval `ssh-agent -k` - -These commands can be used in scripts of course. - -### Modified time ### - -Modified times are stored on the server to 1 second precision. - -Modified times are used in syncing and are fully supported. - -### Limitations ### - -SFTP does not support any checksums. - -The only ssh agent supported under Windows is Putty's pagent. - -SFTP isn't supported under plan9 until [this -issue](https://github.com/pkg/sftp/issues/156) is fixed. - -Note that since SFTP isn't HTTP based the following flags don't work -with it: `--dump-headers`, `--dump-bodies`, `--dump-auth` - -Note that `--timeout` isn't supported (but `--contimeout` is). - -FTP ------------------------------- - -FTP is the File Transfer Protocol. FTP support is provided using the -[github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp) -package. - -Here is an example of making an FTP configuration. First run - - rclone config - -This will guide you through an interactive setup process. An FTP remote only -needs a host together with and a username and a password. 
With anonymous FTP -server, you will need to use `anonymous` as username and your email address as -the password. - -``` -No remotes found - make a new one -n) New remote -r) Rename remote -c) Copy remote -s) Set configuration password -q) Quit config -n/r/c/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" -10 / Local Disk - \ "local" -11 / Microsoft OneDrive - \ "onedrive" -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -13 / SSH/SFTP Connection - \ "sftp" -14 / Yandex Disk - \ "yandex" -Storage> ftp -FTP host to connect to -Choose a number from below, or type in your own value - 1 / Connect to ftp.example.com - \ "ftp.example.com" -host> ftp.example.com -FTP username, leave blank for current username, ncw -user> -FTP port, leave blank to use default (21) -port> -FTP password -y) Yes type in my own password -g) Generate random password -y/g> y -Enter the password: -password: -Confirm the password: -password: -Remote config --------------------- -[remote] -host = ftp.example.com -user = -port = -pass = *** ENCRYPTED *** --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -This remote is called `remote` and can now be used like this - -See all directories in the home directory - - rclone lsd remote: - -Make a new directory - - rclone mkdir remote:path/to/directory - -List the contents of a directory - - rclone ls remote:path/to/directory - -Sync `/home/local/directory` to the remote directory, deleting any -excess files in the directory. - - rclone sync /home/local/directory remote:directory - -### Modified time ### - -FTP does not support modified times. Any times you see on the server -will be time of upload. - -### Checksums ### - -FTP does not support any checksums. - -### Limitations ### - -Note that since FTP isn't HTTP based the following flags don't work -with it: `--dump-headers`, `--dump-bodies`, `--dump-auth` - -Note that `--timeout` isn't supported (but `--contimeout` is). - -FTP could support server side move but doesn't yet. - -HTTP -------------------------------------------------- - -The HTTP remote is a read only remote for reading files of a -webserver. The webserver should provide file listings which rclone -will read and turn into a remote. This has been tested with common -webservers such as Apache/Nginx/Caddy and will likely work with file -listings from most web servers. (If it doesn't then please file an -issue, or send a pull request!) - -Paths are specified as `remote:` or `remote:path/to/dir`. - -Here is an example of how to make a remote called `remote`. First -run: +Here is an example of how to make a remote called `remote`. 
First run:
 
 rclone config
 
@@ -5378,50 +4327,113 @@ Choose a number from below, or type in your own value
 \ "s3"
 3 / Backblaze B2
 \ "b2"
- 4 / Dropbox
+ 4 / Box
+ \ "box"
+ 5 / Dropbox
 \ "dropbox"
- 5 / Encrypt/Decrypt a remote
+ 6 / Encrypt/Decrypt a remote
 \ "crypt"
- 6 / FTP Connection
+ 7 / FTP Connection
 \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
+ 8 / Google Cloud Storage (this is not Google Drive)
 \ "google cloud storage"
- 8 / Google Drive
+ 9 / Google Drive
 \ "drive"
- 9 / Hubic
+10 / Hubic
 \ "hubic"
-10 / Local Disk
+11 / Local Disk
 \ "local"
-11 / Microsoft OneDrive
+12 / Microsoft OneDrive
 \ "onedrive"
-12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+13 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
 \ "swift"
-13 / SSH/SFTP Connection
+14 / SSH/SFTP Connection
 \ "sftp"
-14 / Yandex Disk
+15 / Yandex Disk
 \ "yandex"
-15 / http Connection
+16 / http Connection
 \ "http"
-Storage> http
-URL of http host to connect to
-Choose a number from below, or type in your own value
- 1 / Connect to example.com
- \ "https://example.com"
-url> https://beta.rclone.org
+Storage> box
+Box App Client Id - leave blank normally.
+client_id>
+Box App Client Secret - leave blank normally.
+client_secret>
 Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
 --------------------
 [remote]
-url = https://beta.rclone.org
+client_id =
+client_secret =
+token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
 --------------------
 y) Yes this is OK
 e) Edit this remote
 d) Delete this remote
 y/e/d> y
+```
+
+See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+machine with no Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Box. This only runs from the moment it opens
+your browser to the moment you get back the verification code. This
+is on `http://127.0.0.1:53682/` and this may require you to unblock
+it temporarily if you are running a host firewall.
+
+Once configured you can then use `rclone` like this,
+
+List directories in top level of your Box
+
+ rclone lsd remote:
+
+List all the files in your Box
+
+ rclone ls remote:
+
+To copy a local directory to a Box directory called backup
+
+ rclone copy /home/source remote:backup
+
+### Invalid refresh token ###
+
+According to the [box docs](https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens):
+
+> Each refresh_token is valid for one use in 60 days.
+
+This means that if you
+
+ * Don't use the box remote for 60 days
+ * Copy the config file with a box refresh token in and use it in two places
+ * Get an error on a token refresh
+
+then rclone will return an error which includes the text `Invalid
+refresh token`.
+
+To fix this you will need to use oauth2 again to update the refresh
+token. You can use the methods in [the remote setup
+docs](https://rclone.org/remote_setup/), bearing in mind that if you use the
+copy-the-config-file method, you should not use that remote on the
+computer you did the authentication on.
+
+Here is how to do it.
+
+```
+$ rclone config
Current remotes:
 
Name Type
==== ====
-remote http
+remote box
 
e) Edit existing remote
n) New remote
@@ -5430,51 +4442,92 @@ r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
-e/n/d/r/c/s/q> q
+e/n/d/r/c/s/q> e
+Choose a number from below, or type in an existing value
+ 1 > remote
+remote> remote
+--------------------
+[remote]
+type = box
+token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
+--------------------
+Edit remote
+Value "client_id" = ""
+Edit? (y/n)>
+y) Yes
+n) No
+y/n> n
+Value "client_secret" = ""
+Edit? (y/n)>
+y) Yes
+n) No
+y/n> n
+Remote config
+Already have a token - refresh?
+y) Yes
+n) No
+y/n> y
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = box
+token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
 ```
 
-This remote is called `remote` and can now be used like this
+### Modified time and hashes ###
 
-See all the top level directories
+Box allows modification times to be set on objects accurate to 1
+second. These will be used to detect whether objects need syncing or
+not.
 
- rclone lsd remote:
+Box supports SHA1 type hashes, so you can use the `--checksum`
+flag.
 
-List the contents of a directory
+### Transfers ###
 
- rclone ls remote:directory
+For files above 50MB rclone will use a chunked transfer. Rclone will
+upload up to `--transfers` chunks at the same time (shared among all
+the multipart uploads). Chunks are buffered in memory and are
+normally 8MB so increasing `--transfers` will increase memory use.
 
-Sync the remote `directory` to `/home/local/directory`, deleting any excess files.
+### Deleting files ###
 
- rclone sync remote:directory /home/local/directory
+Depending on the enterprise settings for your user, the item will
+either be actually deleted from Box or moved to the trash.
 
-### Read only ###
+### Specific options ###
 
-This remote is read only - you can't upload files to an HTTP server.
+Here are the command line options specific to this cloud storage
+system.
 
-### Modified time ###
+#### --box-upload-cutoff=SIZE ####
 
-Most HTTP servers store time accurate to 1 second.
+Cutoff for switching to chunked upload - must be >= 50MB. The default
+is 50MB.
 
-### Checksum ###
+### Limitations ###
 
-No checksums are stored.
+Note that Box is case insensitive so you can't have a file called
+"Hello.doc" and one called "hello.doc".
 
-### Usage without a config file ###
+Box file names can't have the `\` character in them. rclone maps this
+to and from an identical looking unicode equivalent `＼`.
 
-Note that since only two environment variable need to be set, it is
-easy to use without a config file like this.
-
-```
-RCLONE_CONFIG_ZZ_TYPE=http RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org rclone lsd zz:
-```
-
-Or if you prefer
-
-```
-export RCLONE_CONFIG_ZZ_TYPE=http
-export RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org
-rclone lsd zz:
-```
+Box only supports filenames up to 255 characters in length.
 
 Crypt
 ----------------------------------------
 
@@ -5881,6 +4934,2113 @@ then rclone uses an internal one.
encrypted data. For full protection against this you should always use
a salt.

+Dropbox
+---------------------------------
+
+Paths are specified as `remote:path`
+
+Dropbox paths may be as deep as required, eg
+`remote:directory/subdirectory`.
+
+The initial setup for dropbox involves getting a token from Dropbox
+which you need to do in your browser.  `rclone config` walks you
+through it.
+
+Here is an example of how to make a remote called `remote`.  First run:
+
+     rclone config
+
+This will guide you through an interactive setup process:
+
+```
+n) New remote
+d) Delete remote
+q) Quit config
+e/n/d/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+   \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+   \ "s3"
+ 3 / Backblaze B2
+   \ "b2"
+ 4 / Dropbox
+   \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+   \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
+   \ "google cloud storage"
+ 7 / Google Drive
+   \ "drive"
+ 8 / Hubic
+   \ "hubic"
+ 9 / Local Disk
+   \ "local"
+10 / Microsoft OneDrive
+   \ "onedrive"
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+   \ "swift"
+12 / SSH/SFTP Connection
+   \ "sftp"
+13 / Yandex Disk
+   \ "yandex"
+Storage> 4
+Dropbox App Key - leave blank normally.
+app_key>
+Dropbox App Secret - leave blank normally.
+app_secret>
+Remote config
+Please visit:
+https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
+Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
+--------------------
+[remote]
+app_key =
+app_secret =
+token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+You can then use it like this,
+
+List directories in top level of your dropbox
+
+    rclone lsd remote:
+
+List all the files in your dropbox
+
+    rclone ls remote:
+
+To copy a local directory to a dropbox directory called backup
+
+    rclone copy /home/source remote:backup
+
+### Modified time and Hashes ###
+
+Dropbox supports modified times, but the only way to set a
+modification time is to re-upload the file.
+
+This means that if you uploaded your data with an older version of
+rclone which didn't support the v2 API and modified times, rclone will
+decide to upload all your old data to fix the modification times.  If
+you don't want this to happen use the `--size-only` or `--checksum` flag
+to stop it.
+
+Dropbox supports [its own hash
+type](https://www.dropbox.com/developers/reference/content-hash) which
+is checked for all transfers.
+
+### Specific options ###
+
+Here are the command line options specific to this cloud storage
+system.
+
+#### --dropbox-chunk-size=SIZE ####
+
+Upload chunk size. Max 150M. The default is 128MB.  Note that this
+isn't buffered into memory.
+
+### Limitations ###
+
+Note that Dropbox is case insensitive so you can't have a file called
+"Hello.doc" and one called "hello.doc".
+
+There are some file names such as `thumbs.db` which Dropbox can't
+store.  There is a full list of them in the ["Ignored Files" section
+of this document](https://www.dropbox.com/en/help/145).  Rclone will
+issue an error message `File name disallowed - not uploading` if it
+attempts to upload one of those file names, but the sync won't fail.
+
+If you have more than 10,000 files in a directory then `rclone purge
+dropbox:dir` will return the error `Failed to purge: There are too
+many files involved in this operation`.
As a work-around do an
+`rclone delete dropbox:dir` followed by an `rclone rmdir dropbox:dir`.
+
+FTP
+------------------------------
+
+FTP is the File Transfer Protocol.  FTP support is provided using the
+[github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp)
+package.
+
+Here is an example of making an FTP configuration.  First run
+
+    rclone config
+
+This will guide you through an interactive setup process. An FTP remote only
+needs a host together with a username and a password. With an anonymous FTP
+server, use `anonymous` as the username and your email address as
+the password.
+
+```
+No remotes found - make a new one
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+   \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+   \ "s3"
+ 3 / Backblaze B2
+   \ "b2"
+ 4 / Dropbox
+   \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+   \ "crypt"
+ 6 / FTP Connection
+   \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
+   \ "google cloud storage"
+ 8 / Google Drive
+   \ "drive"
+ 9 / Hubic
+   \ "hubic"
+10 / Local Disk
+   \ "local"
+11 / Microsoft OneDrive
+   \ "onedrive"
+12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+   \ "swift"
+13 / SSH/SFTP Connection
+   \ "sftp"
+14 / Yandex Disk
+   \ "yandex"
+Storage> ftp
+FTP host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to ftp.example.com
+   \ "ftp.example.com"
+host> ftp.example.com
+FTP username, leave blank for current username, ncw
+user>
+FTP port, leave blank to use default (21)
+port>
+FTP password
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Remote config
+--------------------
+[remote]
+host = ftp.example.com
+user =
+port =
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+This remote is called `remote` and can now be used like this
+
+See all directories in the home directory
+
+    rclone lsd remote:
+
+Make a new directory
+
+    rclone mkdir remote:path/to/directory
+
+List the contents of a directory
+
+    rclone ls remote:path/to/directory
+
+Sync `/home/local/directory` to the remote directory, deleting any
+excess files in the directory.
+
+    rclone sync /home/local/directory remote:directory
+
+### Modified time ###
+
+FTP does not support modified times.  Any times you see on the server
+will be time of upload.
+
+### Checksums ###
+
+FTP does not support any checksums.
+
+### Limitations ###
+
+Note that since FTP isn't HTTP based the following flags don't work
+with it: `--dump-headers`, `--dump-bodies`, `--dump-auth`
+
+Note that `--timeout` isn't supported (but `--contimeout` is).
+
+Note that `--bind` isn't supported.
+
+FTP could support server side move but doesn't yet.
+
+Google Cloud Storage
+-------------------------------------------------
+
+Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
+command.)  You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
+
+The initial setup for google cloud storage involves getting a token from Google Cloud Storage
+which you need to do in your browser.  `rclone config` walks you
+through it.
+
+Here is an example of how to make a remote called `remote`.
First run: + + rclone config + +This will guide you through an interactive setup process: + +``` +n) New remote +d) Delete remote +q) Quit config +e/n/d/q> n +name> remote +Type of storage to configure. +Choose a number from below, or type in your own value + 1 / Amazon Drive + \ "amazon cloud drive" + 2 / Amazon S3 (also Dreamhost, Ceph, Minio) + \ "s3" + 3 / Backblaze B2 + \ "b2" + 4 / Dropbox + \ "dropbox" + 5 / Encrypt/Decrypt a remote + \ "crypt" + 6 / Google Cloud Storage (this is not Google Drive) + \ "google cloud storage" + 7 / Google Drive + \ "drive" + 8 / Hubic + \ "hubic" + 9 / Local Disk + \ "local" +10 / Microsoft OneDrive + \ "onedrive" +11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) + \ "swift" +12 / SSH/SFTP Connection + \ "sftp" +13 / Yandex Disk + \ "yandex" +Storage> 6 +Google Application Client Id - leave blank normally. +client_id> +Google Application Client Secret - leave blank normally. +client_secret> +Project number optional - needed only for list/create/delete buckets - see your developer console. +project_number> 12345678 +Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. +service_account_file> +Access Control List for new objects. +Choose a number from below, or type in your own value + 1 / Object owner gets OWNER access, and all Authenticated Users get READER access. + \ "authenticatedRead" + 2 / Object owner gets OWNER access, and project team owners get OWNER access. + \ "bucketOwnerFullControl" + 3 / Object owner gets OWNER access, and project team owners get READER access. + \ "bucketOwnerRead" + 4 / Object owner gets OWNER access [default if left blank]. + \ "private" + 5 / Object owner gets OWNER access, and project team members get access according to their roles. + \ "projectPrivate" + 6 / Object owner gets OWNER access, and all Users get READER access. + \ "publicRead" +object_acl> 4 +Access Control List for new buckets. +Choose a number from below, or type in your own value + 1 / Project team owners get OWNER access, and all Authenticated Users get READER access. + \ "authenticatedRead" + 2 / Project team owners get OWNER access [default if left blank]. + \ "private" + 3 / Project team members get access according to their roles. + \ "projectPrivate" + 4 / Project team owners get OWNER access, and all Users get READER access. + \ "publicRead" + 5 / Project team owners get OWNER access, and all Users get WRITER access. + \ "publicReadWrite" +bucket_acl> 2 +Location for the newly created buckets. +Choose a number from below, or type in your own value + 1 / Empty for default location (US). + \ "" + 2 / Multi-regional location for Asia. + \ "asia" + 3 / Multi-regional location for Europe. + \ "eu" + 4 / Multi-regional location for United States. + \ "us" + 5 / Taiwan. + \ "asia-east1" + 6 / Tokyo. + \ "asia-northeast1" + 7 / Singapore. + \ "asia-southeast1" + 8 / Sydney. + \ "australia-southeast1" + 9 / Belgium. + \ "europe-west1" +10 / London. + \ "europe-west2" +11 / Iowa. + \ "us-central1" +12 / South Carolina. + \ "us-east1" +13 / Northern Virginia. + \ "us-east4" +14 / Oregon. + \ "us-west1" +location> 12 +The storage class to use when storing objects in Google Cloud Storage. 
+
+Choose a number from below, or type in your own value
+ 1 / Default
+   \ ""
+ 2 / Multi-regional storage class
+   \ "MULTI_REGIONAL"
+ 3 / Regional storage class
+   \ "REGIONAL"
+ 4 / Nearline storage class
+   \ "NEARLINE"
+ 5 / Coldline storage class
+   \ "COLDLINE"
+ 6 / Durable reduced availability storage class
+   \ "DURABLE_REDUCED_AVAILABILITY"
+storage_class> 5
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine or Y didn't work
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = google cloud storage
+client_id =
+client_secret =
+token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
+project_number = 12345678
+object_acl = private
+bucket_acl = private
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Google if you use auto config mode. This only
+runs from the moment it opens your browser to the moment you get back
+the verification code.  This is on `http://127.0.0.1:53682/` and this
+may require you to unblock it temporarily if you are running a host
+firewall, or use manual mode.
+
+This remote is called `remote` and can now be used like this
+
+See all the buckets in your project
+
+    rclone lsd remote:
+
+Make a new bucket
+
+    rclone mkdir remote:bucket
+
+List the contents of a bucket
+
+    rclone ls remote:bucket
+
+Sync `/home/local/directory` to the remote bucket, deleting any excess
+files in the bucket.
+
+    rclone sync /home/local/directory remote:bucket
+
+### Service Account support ###
+
+You can set up rclone with Google Cloud Storage in an unattended mode,
+i.e. not tied to a specific end-user Google account. This is useful
+when you want to synchronise files onto machines that don't have
+actively logged-in users, for example build machines.
+
+To get credentials for Google Cloud Platform
+[IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts),
+please head to the
+[Service Account](https://console.cloud.google.com/permissions/serviceaccounts)
+section of the Google Developer Console. Service Accounts behave just
+like normal `User` permissions in
+[Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control),
+so you can limit their access (e.g. make them read only). After
+creating an account, a JSON file containing the Service Account's
+credentials will be downloaded onto your machines. These credentials
+are what rclone will use for authentication.
+
+To use a Service Account instead of OAuth2 token flow, enter the path
+to your Service Account credentials at the `service_account_file`
+prompt and rclone won't use the browser based authentication
+flow.
+
+### --fast-list ###
+
+This remote supports `--fast-list` which allows you to use fewer
+transactions in exchange for more memory. See the [rclone
+docs](/docs/#fast-list) for more details.
+
+### Modified time ###
+
+Google Cloud Storage stores md5sums natively and rclone stores
+modification times as metadata on the object, under the "mtime" key in
+RFC3339 format accurate to 1ns.
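+
+To round off the Service Account section above, here is a minimal
+sketch of what such a remote could look like written straight into the
+config file (the remote name `gcs` and the key file path are
+illustrative, not taken from the transcript above):
+
+```
+[gcs]
+type = google cloud storage
+project_number = 12345678
+service_account_file = /path/to/service-account-credentials.json
+object_acl = private
+bucket_acl = private
+```
+
+With that in place `rclone lsd gcs:` should work with no browser
+interaction at all.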
+
+Google Drive
+-----------------------------------------
+
+Paths are specified as `drive:path`
+
+Drive paths may be as deep as required, eg `drive:directory/subdirectory`.
+
+The initial setup for drive involves getting a token from Google drive
+which you need to do in your browser.  `rclone config` walks you
+through it.
+
+Here is an example of how to make a remote called `remote`.  First run:
+
+     rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+   \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+   \ "s3"
+ 3 / Backblaze B2
+   \ "b2"
+ 4 / Dropbox
+   \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+   \ "crypt"
+ 6 / FTP Connection
+   \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
+   \ "google cloud storage"
+ 8 / Google Drive
+   \ "drive"
+ 9 / Hubic
+   \ "hubic"
+10 / Local Disk
+   \ "local"
+11 / Microsoft OneDrive
+   \ "onedrive"
+12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+   \ "swift"
+13 / SSH/SFTP Connection
+   \ "sftp"
+14 / Yandex Disk
+   \ "yandex"
+Storage> 8
+Google Application Client Id - leave blank normally.
+client_id>
+Google Application Client Secret - leave blank normally.
+client_secret>
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine or Y didn't work
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+Configure this as a team drive?
+y) Yes
+n) No
+y/n> n
+--------------------
+[remote]
+client_id =
+client_secret =
+token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Google if you use auto config mode. This only
+runs from the moment it opens your browser to the moment you get back
+the verification code.  This is on `http://127.0.0.1:53682/` and this
+may require you to unblock it temporarily if you are running a host
+firewall, or use manual mode.
+
+You can then use it like this,
+
+List directories in top level of your drive
+
+    rclone lsd remote:
+
+List all the files in your drive
+
+    rclone ls remote:
+
+To copy a local directory to a drive directory called backup
+
+    rclone copy /home/source remote:backup
+
+### Team drives ###
+
+If you want to configure the remote to point to a Google Team Drive
+then answer `y` to the question `Configure this as a team drive?`.
+
+This will fetch the list of Team Drives from google and allow you to
+configure which one you want to use.  You can also type in a team
+drive ID if you prefer.
+
+For example:
+
+```
+Configure this as a team drive?
+y) Yes
+n) No
+y/n> y
+Fetching team drive list...
+
+Choose a number from below, or type in your own value
+ 1 / Rclone Test
+   \ "xxxxxxxxxxxxxxxxxxxx"
+ 2 / Rclone Test 2
+   \ "yyyyyyyyyyyyyyyyyyyy"
+ 3 / Rclone Test 3
+   \ "zzzzzzzzzzzzzzzzzzzz"
+Enter a Team Drive ID> 1
+--------------------
+[remote]
+client_id =
+client_secret =
+token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
+team_drive = xxxxxxxxxxxxxxxxxxxx
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+### Modified time ###
+
+Google drive stores modification times accurate to 1 ms.
+
+### Revisions ###
+
+Google drive stores revisions of files.  When you upload a change to
+an existing file to google drive using rclone it will create a new
+revision of that file.
+
+Revisions follow the standard google policy which at the time of
+writing was
+
+  * They are deleted after 30 days or 100 revisions (whichever comes first).
+  * They do not count towards a user storage quota.
+
+### Deleting files ###
+
+By default rclone will send all files to the trash when deleting
+files.  If deleting them permanently is required then use the
+`--drive-use-trash=false` flag, or set the equivalent environment
+variable.
+
+### Emptying trash ###
+
+If you wish to empty your trash you can use the `rclone cleanup remote:`
+command which will permanently delete all your trashed files. This command
+does not take any path arguments.
+
+### Specific options ###
+
+Here are the command line options specific to this cloud storage
+system.
+
+#### --drive-auth-owner-only ####
+
+Only consider files owned by the authenticated user.
+
+#### --drive-chunk-size=SIZE ####
+
+Upload chunk size. Must be a power of 2 >= 256k. Default value is 8 MB.
+
+Making this larger will improve performance, but note that chunks are
+buffered in memory, one per transfer.
+
+Reducing this will reduce memory usage but decrease performance.
+
+#### --drive-formats ####
+
+Google documents can only be exported from Google drive.  When rclone
+downloads a Google doc it chooses a format to download depending upon
+this setting.
+
+By default the formats are `docx,xlsx,pptx,svg` which are a sensible
+default for an editable document.
+
+When choosing a format, rclone runs down the list provided in order
+and chooses the first file format the doc can be exported as from the
+list. If the file can't be exported to a format on the formats list,
+then rclone will choose a format from the default list.
+
+If you prefer an archive copy then you might use `--drive-formats
+pdf`, or if you prefer openoffice/libreoffice formats you might use
+`--drive-formats ods,odt,odp`.
+
+Note that rclone adds the extension to the google doc, so if it is
+called `My Spreadsheet` on google docs, it will be exported as `My
+Spreadsheet.xlsx` or `My Spreadsheet.pdf` etc.
+
+Here are the possible extensions with their corresponding mime types.
+
+| Extension | Mime Type | Description |
+| --------- |-----------| ------------|
+| csv  | text/csv | Standard CSV format for Spreadsheets |
+| doc  | application/msword | Microsoft Office Document |
+| docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document |
+| epub | application/epub+zip | E-book format |
+| html | text/html | An HTML Document |
+| jpg  | image/jpeg | A JPEG Image File |
+| odp  | application/vnd.oasis.opendocument.presentation | Openoffice Presentation |
+| ods  | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
+| ods  | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
+| odt  | application/vnd.oasis.opendocument.text | Openoffice Document |
+| pdf  | application/pdf | Adobe PDF Format |
+| png  | image/png | PNG Image Format |
+| pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint |
+| rtf  | application/rtf | Rich Text Format |
+| svg  | image/svg+xml | Scalable Vector Graphics Format |
+| tsv  | text/tab-separated-values | Standard TSV format for spreadsheets |
+| txt  | text/plain | Plain Text |
+| xls  | application/vnd.ms-excel | Microsoft Office Spreadsheet |
+| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet |
+| zip  | application/zip | A ZIP file of HTML, Images and CSS |
+
+#### --drive-list-chunk int ####
+
+Size of listing chunk 100-1000. 0 to disable. (default 1000)
+
+#### --drive-shared-with-me ####
+
+Only show files that are shared with me
+
+#### --drive-skip-gdocs ####
+
+Skip google documents in all listings. If given, gdocs practically become invisible to rclone.
+
+#### --drive-trashed-only ####
+
+Only show files that are in the trash.  This will show trashed files
+in their original directory structure.
+
+#### --drive-upload-cutoff=SIZE ####
+
+File size cutoff for switching to chunked upload.  Default is 8 MB.
+
+#### --drive-use-trash ####
+
+Controls whether files are sent to the trash or deleted
+permanently. Defaults to true, namely sending files to the trash. Use
+`--drive-use-trash=false` to delete files permanently instead.
+
+### Limitations ###
+
+Drive has quite a lot of rate limiting.  This causes rclone to be
+limited to transferring about 2 files per second only.  Individual
+files may be transferred much faster at 100s of MBytes/s but lots of
+small files can take a long time.
+
+Server side copies are also subject to a separate rate limit. If you
+see `User rate limit exceeded` errors, wait at least 24 hours and retry.
+You can disable server side copies with `--disable copy` to download
+and upload the files if you prefer.
+
+### Duplicated files ###
+
+Sometimes, for no reason I've been able to track down, drive will
+duplicate a file that rclone uploads.  Drive, unlike all the other
+remotes, can have duplicated files.
+
+Duplicated files cause problems with the syncing and you will see
+messages in the log about duplicates.
+
+Use `rclone dedupe` to fix duplicated files.
+
+Note that this isn't just a problem with rclone, even Google Photos on
+Android duplicates files on drive sometimes.
+
+### Rclone appears to be re-copying files it shouldn't ###
+
+There are two possible reasons for rclone re-copying files to Google
+Drive which haven't changed.
+
+The first is the duplicated file issue above - run `rclone dedupe` and
+check your logs for duplicate object or directory messages.
+ +The second is that sometimes Google reports different sizes for the +Google Docs exports which will cause rclone to re-download Google Docs +for no apparent reason. `--ignore-size` is a not very satisfactory +work-around for this if it is causing you a lot of problems. + +### Google docs downloads sometimes fail with "Failed to copy: read X bytes expecting Y" ### + +This is the same problem as above. Google reports the google doc is +one size, but rclone downloads a different size. Work-around with the +`--ignore-size` flag or wait for rclone to retry the download which it +will. + +### Making your own client_id ### + +When you use rclone with Google drive in its default configuration you +are using rclone's client_id. This is shared between all the rclone +users. There is a global rate limit on the number of queries per +second that each client_id can do set by Google. rclone already has a +high quota and I will continue to make sure it is high enough by +contacting Google. + +However you might find you get better performance making your own +client_id if you are a heavy user. Or you may not depending on exactly +how Google have been raising rclone's rate limit. + +Here is how to create your own Google Drive client ID for rclone: + +1. Log into the [Google API +Console](https://console.developers.google.com/) with your Google +account. It doesn't matter what Google account you use. (It need not +be the same account as the Google Drive you want to access) + +2. Select a project or create a new project. + +3. Under Overview, Google APIs, Google Apps APIs, click "Drive API", +then "Enable". + +4. Click "Credentials" in the left-side panel (not "Go to +credentials", which opens the wizard), then "Create credentials", then +"OAuth client ID". It will prompt you to set the OAuth consent screen +product name, if you haven't set one already. + +5. Choose an application type of "other", and click "Create". (the +default name is fine) + +6. It will show you a client ID and client secret. Use these values +in rclone config to add a new remote or edit an existing remote. + +(Thanks to @balazer on github for these instructions.) + +HTTP +------------------------------------------------- + +The HTTP remote is a read only remote for reading files of a +webserver. The webserver should provide file listings which rclone +will read and turn into a remote. This has been tested with common +webservers such as Apache/Nginx/Caddy and will likely work with file +listings from most web servers. (If it doesn't then please file an +issue, or send a pull request!) + +Paths are specified as `remote:` or `remote:path/to/dir`. + +Here is an example of how to make a remote called `remote`. First +run: + + rclone config + +This will guide you through an interactive setup process: + +``` +No remotes found - make a new one +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. 
+
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+   \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+   \ "s3"
+ 3 / Backblaze B2
+   \ "b2"
+ 4 / Dropbox
+   \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+   \ "crypt"
+ 6 / FTP Connection
+   \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
+   \ "google cloud storage"
+ 8 / Google Drive
+   \ "drive"
+ 9 / Hubic
+   \ "hubic"
+10 / Local Disk
+   \ "local"
+11 / Microsoft OneDrive
+   \ "onedrive"
+12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+   \ "swift"
+13 / SSH/SFTP Connection
+   \ "sftp"
+14 / Yandex Disk
+   \ "yandex"
+15 / http Connection
+   \ "http"
+Storage> http
+URL of http host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+   \ "https://example.com"
+url> https://beta.rclone.org
+Remote config
+--------------------
+[remote]
+url = https://beta.rclone.org
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name                 Type
+====                 ====
+remote               http
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+```
+
+This remote is called `remote` and can now be used like this
+
+See all the top level directories
+
+    rclone lsd remote:
+
+List the contents of a directory
+
+    rclone ls remote:directory
+
+Sync the remote `directory` to `/home/local/directory`, deleting any excess files.
+
+    rclone sync remote:directory /home/local/directory
+
+### Read only ###
+
+This remote is read only - you can't upload files to an HTTP server.
+
+### Modified time ###
+
+Most HTTP servers store time accurate to 1 second.
+
+### Checksum ###
+
+No checksums are stored.
+
+### Usage without a config file ###
+
+Note that since only two environment variables need to be set, it is
+easy to use without a config file like this.
+
+```
+RCLONE_CONFIG_ZZ_TYPE=http RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org rclone lsd zz:
+```
+
+Or if you prefer
+
+```
+export RCLONE_CONFIG_ZZ_TYPE=http
+export RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org
+rclone lsd zz:
+```
+
+Hubic
+-----------------------------------------
+
+Paths are specified as `remote:container` (or `remote:` for the `lsd`
+command.)  You may put subdirectories in too, eg `remote:container/path/to/dir`.
+
+The initial setup for Hubic involves getting a token from Hubic which
+you need to do in your browser.  `rclone config` walks you through it.
+
+Here is an example of how to make a remote called `remote`.  First run:
+
+     rclone config
+
+This will guide you through an interactive setup process:
+
+```
+n) New remote
+s) Set configuration password
+n/s> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+   \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+   \ "s3"
+ 3 / Backblaze B2
+   \ "b2"
+ 4 / Dropbox
+   \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+   \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
+   \ "google cloud storage"
+ 7 / Google Drive
+   \ "drive"
+ 8 / Hubic
+   \ "hubic"
+ 9 / Local Disk
+   \ "local"
+10 / Microsoft OneDrive
+   \ "onedrive"
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+   \ "swift"
+12 / SSH/SFTP Connection
+   \ "sftp"
+13 / Yandex Disk
+   \ "yandex"
+Storage> 8
+Hubic Client Id - leave blank normally.
+
+client_id>
+Hubic Client Secret - leave blank normally.
+client_secret>
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+client_id =
+client_secret =
+token = {"access_token":"XXXXXX"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+machine with no Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Hubic. This only runs from the moment it opens
+your browser to the moment you get back the verification code.  This
+is on `http://127.0.0.1:53682/` and this may require you to unblock
+it temporarily if you are running a host firewall.
+
+Once configured you can then use `rclone` like this,
+
+List containers in the top level of your Hubic
+
+    rclone lsd remote:
+
+List all the files in your Hubic
+
+    rclone ls remote:
+
+To copy a local directory to a Hubic directory called backup
+
+    rclone copy /home/source remote:backup
+
+If you want the directory to be visible in the official *Hubic
+browser*, you need to copy your files to the `default` directory
+
+    rclone copy /home/source remote:default/backup
+
+### --fast-list ###
+
+This remote supports `--fast-list` which allows you to use fewer
+transactions in exchange for more memory. See the [rclone
+docs](/docs/#fast-list) for more details.
+
+### Modified time ###
+
+The modified time is stored as metadata on the object as
+`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
+ns.
+
+This is a defacto standard (used in the official python-swiftclient
+amongst others) for storing the modification time for an object.
+
+Note that Hubic wraps the Swift backend, so most of the properties
+are the same.
+
+### Limitations ###
+
+This uses the normal OpenStack Swift mechanism to refresh the Swift
+API credentials and ignores the expires field returned by the Hubic
+API.
+
+The Swift API doesn't return a correct MD5SUM for segmented files
+(Dynamic or Static Large Objects) so rclone won't check or use the
+MD5SUM for these.
+
+Microsoft Azure Blob Storage
+-----------------------------------------
+
+Paths are specified as `remote:container` (or `remote:` for the `lsd`
+command.)  You may put subdirectories in too, eg
+`remote:container/path/to/dir`.
+
+Here is an example of making a Microsoft Azure Blob Storage
+configuration for a remote called `remote`.  First run:
+
+     rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+   \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+   \ "s3"
+ 3 / Backblaze B2
+   \ "b2"
+ 4 / Box
+   \ "box"
+ 5 / Dropbox
+   \ "dropbox"
+ 6 / Encrypt/Decrypt a remote
+   \ "crypt"
+ 7 / FTP Connection
+   \ "ftp"
+ 8 / Google Cloud Storage (this is not Google Drive)
+   \ "google cloud storage"
+ 9 / Google Drive
+   \ "drive"
+10 / Hubic
+   \ "hubic"
+11 / Local Disk
+   \ "local"
+12 / Microsoft Azure Blob Storage
+   \ "azureblob"
+13 / Microsoft OneDrive
+   \ "onedrive"
+14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+   \ "swift"
+15 / SSH/SFTP Connection
+   \ "sftp"
+16 / Yandex Disk
+   \ "yandex"
+17 / http Connection
+   \ "http"
+Storage> azureblob
+Storage Account Name
+account> account_name
+Storage Account Key
+key> base64encodedkey==
+Endpoint for the service - leave blank normally.
+endpoint>
+Remote config
+--------------------
+[remote]
+account = account_name
+key = base64encodedkey==
+endpoint =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+See all containers
+
+    rclone lsd remote:
+
+Make a new container
+
+    rclone mkdir remote:container
+
+List the contents of a container
+
+    rclone ls remote:container
+
+Sync `/home/local/directory` to the remote container, deleting any excess
+files in the container.
+
+    rclone sync /home/local/directory remote:container
+
+### --fast-list ###
+
+This remote supports `--fast-list` which allows you to use fewer
+transactions in exchange for more memory. See the [rclone
+docs](/docs/#fast-list) for more details.
+
+### Modified time ###
+
+The modified time is stored as metadata on the object with the `mtime`
+key.  It is stored using RFC3339 Format time with nanosecond
+precision.  The metadata is supplied during directory listings so
+there is no overhead to using it.
+
+### Hashes ###
+
+MD5 hashes are stored with blobs.  However blobs that were uploaded in
+chunks only have an MD5 if the source remote was capable of MD5
+hashes, eg the local disk.
+
+### Multipart uploads ###
+
+Rclone supports multipart uploads with Azure Blob storage.  Files
+bigger than 256MB will be uploaded using chunked upload by default.
+
+The files will be uploaded in parallel in 4MB chunks (by default).
+Note that these chunks are buffered in memory and there may be up to
+`--transfers` of them being uploaded at once.
+
+Files can't be split into more than 50,000 chunks, so by default
+the largest file that can be uploaded with 4MB chunk size is 195GB.
+Above this rclone will double the chunk size until it creates fewer
+than 50,000 chunks.  By default this will mean a maximum file size of
+3.2TB can be uploaded.  This can be raised to 5TB using
+`--azureblob-chunk-size 100M`.
+
+Note that rclone doesn't commit the block list until the end of the
+upload which means that there is a limit of 9.5TB of multipart uploads
+in progress as Azure won't allow more than that amount of uncommitted
+blocks.
+
+### Specific options ###
+
+Here are the command line options specific to this cloud storage
+system.
+
+#### --azureblob-upload-cutoff=SIZE ####
+
+Cutoff for switching to chunked upload - must be <= 256MB. The default
+is 256MB.
+
+#### --azureblob-chunk-size=SIZE ####
+
+Upload chunk size.  Default 4MB.  Note that this is stored in memory
+and there may be up to `--transfers` chunks stored at once in memory.
+This can be at most 100MB.
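+
+For example, a plausible invocation for uploading very large files
+with the biggest chunk size (the paths and container name are
+illustrative):
+
+```
+rclone copy --azureblob-chunk-size 100M /data/backups remote:container
+```
+
+Remember that up to `--transfers` chunks of this size may be held in
+memory at once.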
+
+### Limitations ###
+
+MD5 sums are only uploaded with chunked files if the source has an MD5
+sum.  This will always be the case for a local to azure copy.
+
+Microsoft OneDrive
+-----------------------------------------
+
+Paths are specified as `remote:path`
+
+Paths may be as deep as required, eg `remote:directory/subdirectory`.
+
+The initial setup for OneDrive involves getting a token from
+Microsoft which you need to do in your browser.  `rclone config` walks
+you through it.
+
+Here is an example of how to make a remote called `remote`.  First run:
+
+     rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+n/s> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+   \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+   \ "s3"
+ 3 / Backblaze B2
+   \ "b2"
+ 4 / Dropbox
+   \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+   \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
+   \ "google cloud storage"
+ 7 / Google Drive
+   \ "drive"
+ 8 / Hubic
+   \ "hubic"
+ 9 / Local Disk
+   \ "local"
+10 / Microsoft OneDrive
+   \ "onedrive"
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+   \ "swift"
+12 / SSH/SFTP Connection
+   \ "sftp"
+13 / Yandex Disk
+   \ "yandex"
+Storage> 10
+Microsoft App Client Id - leave blank normally.
+client_id>
+Microsoft App Client Secret - leave blank normally.
+client_secret>
+Remote config
+Choose OneDrive account type?
+ * Say b for a OneDrive business account
+ * Say p for a personal OneDrive account
+b) Business
+p) Personal
+b/p> p
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+client_id =
+client_secret =
+token = {"access_token":"XXXXXX"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+machine with no Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Microsoft. This only runs from the moment it
+opens your browser to the moment you get back the verification
+code.  This is on `http://127.0.0.1:53682/` and this may require
+you to unblock it temporarily if you are running a host firewall.
+
+Once configured you can then use `rclone` like this,
+
+List directories in top level of your OneDrive
+
+    rclone lsd remote:
+
+List all the files in your OneDrive
+
+    rclone ls remote:
+
+To copy a local directory to a OneDrive directory called backup
+
+    rclone copy /home/source remote:backup
+
+### OneDrive for Business ###
+
+There is additional support for OneDrive for Business.
+Select "b" when asked
+```
+Choose OneDrive account type?
+ * Say b for a OneDrive business account
+ * Say p for a personal OneDrive account
+b) Business
+p) Personal
+b/p>
+```
+After that rclone will authenticate your account: the application first
+authenticates your account, then queries the OneDrive resource URL
+and does a second (silent) authentication for this resource URL.
+
+### Modified time and hashes ###
+
+OneDrive allows modification times to be set on objects accurate to 1
+second.  These will be used to detect whether objects need syncing or
+not.
+
+OneDrive supports SHA1 type hashes, so you can use the `--checksum` flag.
+
+
+### Deleting files ###
+
+Any files you delete with rclone will end up in the trash.  Microsoft
+doesn't provide an API to permanently delete files, nor to empty the
+trash, so you will have to do that with one of Microsoft's apps or via
+the OneDrive website.
+
+### Specific options ###
+
+Here are the command line options specific to this cloud storage
+system.
+
+#### --onedrive-chunk-size=SIZE ####
+
+Above this size files will be chunked - must be multiple of 320k. The
+default is 10MB.  Note that the chunks will be buffered into memory.
+
+#### --onedrive-upload-cutoff=SIZE ####
+
+Cutoff for switching to chunked upload - must be <= 100MB. The default
+is 10MB.
+
+### Limitations ###
+
+Note that OneDrive is case insensitive so you can't have a
+file called "Hello.doc" and one called "hello.doc".
+
+There are quite a few characters that can't be in OneDrive file
+names.  These can't occur on Windows platforms, but on non-Windows
+platforms they are common.  Rclone will map these names to and from an
+identical looking unicode equivalent.  For example if a file has a `?`
+in it, it will be mapped to `？` instead.
+
+The largest allowed file size is 10GiB (10,737,418,240 bytes).
+
+QingStor
+---------------------------------------
+
+Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
+command.)  You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
+
+Here is an example of making a QingStor configuration.  First run
+
+    rclone config
+
+This will guide you through an interactive setup process.
+
+```
+No remotes found - make a new one
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+   \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+   \ "s3"
+ 3 / Backblaze B2
+   \ "b2"
+ 4 / Dropbox
+   \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+   \ "crypt"
+ 6 / FTP Connection
+   \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
+   \ "google cloud storage"
+ 8 / Google Drive
+   \ "drive"
+ 9 / Hubic
+   \ "hubic"
+10 / Local Disk
+   \ "local"
+11 / Microsoft OneDrive
+   \ "onedrive"
+12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+   \ "swift"
+13 / QingStor Object Storage
+   \ "qingstor"
+14 / SSH/SFTP Connection
+   \ "sftp"
+15 / Yandex Disk
+   \ "yandex"
+Storage> 13
+Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own value
+ 1 / Enter QingStor credentials in the next step
+   \ "false"
+ 2 / Get QingStor credentials from the environment (env vars or IAM)
+   \ "true"
+env_auth> 1
+QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
+access_key_id> access_key
+QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+secret_access_key> secret_key
+Enter a endpoint URL to connection QingStor API.
+Leave blank will use the default value "https://qingstor.com:443"
+endpoint>
+Zone connect to. Default is "pek3a".
+Choose a number from below, or type in your own value
+   / The Beijing (China) Three Zone
+ 1 | Needs location constraint pek3a.
+ \ "pek3a" + / The Shanghai (China) First Zone + 2 | Needs location constraint sh1a. + \ "sh1a" +zone> 1 +Number of connnection retry. +Leave blank will use the default value "3". +connection_retries> +Remote config +-------------------- +[remote] +env_auth = false +access_key_id = access_key +secret_access_key = secret_key +endpoint = +zone = pek3a +connection_retries = +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +This remote is called `remote` and can now be used like this + +See all buckets + + rclone lsd remote: + +Make a new bucket + + rclone mkdir remote:bucket + +List the contents of a bucket + + rclone ls remote:bucket + +Sync `/home/local/directory` to the remote bucket, deleting any excess +files in the bucket. + + rclone sync /home/local/directory remote:bucket + +### --fast-list ### + +This remote supports `--fast-list` which allows you to use fewer +transactions in exchange for more memory. See the [rclone +docs](/docs/#fast-list) for more details. + +### Multipart uploads ### + +rclone supports multipart uploads with QingStor which means that it can +upload files bigger than 5GB. Note that files uploaded with multipart +upload don't have an MD5SUM. + +### Buckets and Zone ### + +With QingStor you can list buckets (`rclone lsd`) using any zone, +but you can only access the content of a bucket from the zone it was +created in. If you attempt to access a bucket from the wrong zone, +you will get an error, `incorrect zone, the bucket is not in 'XXX' +zone`. + +### Authentication ### + +There are two ways to supply `rclone` with a set of QingStor +credentials. In order of precedence: + + - Directly in the rclone configuration file (as configured by `rclone config`) + - set `access_key_id` and `secret_access_key` + - Runtime configuration: + - set `env_auth` to `true` in the config file + - Exporting the following environment variables before running `rclone` + - Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY` + - Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY` + +Swift +---------------------------------------- + +Swift refers to [Openstack Object Storage](https://docs.openstack.org/swift/latest/). +Commercial implementations of that being: + + * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/) + * [Memset Memstore](https://www.memset.com/cloud/storage/) + * [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/) + * [Oracle Cloud Storage](https://cloud.oracle.com/storage-opc) + +Paths are specified as `remote:container` (or `remote:` for the `lsd` +command.) You may put subdirectories in too, eg `remote:container/path/to/dir`. + +Here is an example of making a swift configuration. First run + + rclone config + +This will guide you through an interactive setup process. + +``` +No remotes found - make a new one +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. 
+Choose a number from below, or type in your own value + 1 / Amazon Drive + \ "amazon cloud drive" + 2 / Amazon S3 (also Dreamhost, Ceph, Minio) + \ "s3" + 3 / Backblaze B2 + \ "b2" + 4 / Box + \ "box" + 5 / Dropbox + \ "dropbox" + 6 / Encrypt/Decrypt a remote + \ "crypt" + 7 / FTP Connection + \ "ftp" + 8 / Google Cloud Storage (this is not Google Drive) + \ "google cloud storage" + 9 / Google Drive + \ "drive" +10 / Hubic + \ "hubic" +11 / Local Disk + \ "local" +12 / Microsoft Azure Blob Storage + \ "azureblob" +13 / Microsoft OneDrive + \ "onedrive" +14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) + \ "swift" +15 / QingClound Object Storage + \ "qingstor" +16 / SSH/SFTP Connection + \ "sftp" +17 / Yandex Disk + \ "yandex" +18 / http Connection + \ "http" +Storage> swift +Get swift credentials from environment variables in standard OpenStack form. +Choose a number from below, or type in your own value + 1 / Enter swift credentials in the next step + \ "false" + 2 / Get swift credentials from environment vars. Leave other fields blank if using this. + \ "true" +env_auth> 1 +User name to log in. +user> user_name +API key or password. +key> password_or_api_key +Authentication URL for server. +Choose a number from below, or type in your own value + 1 / Rackspace US + \ "https://auth.api.rackspacecloud.com/v1.0" + 2 / Rackspace UK + \ "https://lon.auth.api.rackspacecloud.com/v1.0" + 3 / Rackspace v2 + \ "https://identity.api.rackspacecloud.com/v2.0" + 4 / Memset Memstore UK + \ "https://auth.storage.memset.com/v1.0" + 5 / Memset Memstore UK v2 + \ "https://auth.storage.memset.com/v2.0" + 6 / OVH + \ "https://auth.cloud.ovh.net/v2.0" +auth> 1 +User domain - optional (v3 auth) +domain> Default +Tenant name - optional for v1 auth, required otherwise +tenant> tenant_name +Tenant domain - optional (v3 auth) +tenant_domain> +Region name - optional +region> +Storage URL - optional +storage_url> +AuthVersion - optional - set to (1,2,3) if your auth URL has no version +auth_version> +Endpoint type to choose from the service catalogue +Choose a number from below, or type in your own value + 1 / Public (default, choose this if not sure) + \ "public" + 2 / Internal (use internal service net) + \ "internal" + 3 / Admin + \ "admin" +endpoint_type> +Remote config +-------------------- +[remote] +env_auth = false +user = user_name +key = password_or_api_key +auth = https://auth.api.rackspacecloud.com/v1.0 +domain = Default +tenant = +tenant_domain = +region = +storage_url = +auth_version = +endpoint_type = +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +This remote is called `remote` and can now be used like this + +See all containers + + rclone lsd remote: + +Make a new container + + rclone mkdir remote:container + +List the contents of a container + + rclone ls remote:container + +Sync `/home/local/directory` to the remote container, deleting any +excess files in the container. 
+
+    rclone sync /home/local/directory remote:container
+
+### Configuration from an OpenStack credentials file ###
+
+An OpenStack credentials file typically looks something
+like this (without the comments)
+
+```
+export OS_AUTH_URL=https://a.provider.net/v2.0
+export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
+export OS_TENANT_NAME="1234567890123456"
+export OS_USERNAME="123abc567xy"
+echo "Please enter your OpenStack Password: "
+read -sr OS_PASSWORD_INPUT
+export OS_PASSWORD=$OS_PASSWORD_INPUT
+export OS_REGION_NAME="SBG1"
+if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
+```
+
+The config file needs to look something like this where `$OS_USERNAME`
+represents the value of the `OS_USERNAME` variable - `123abc567xy` in
+the example above.
+
+```
+[remote]
+type = swift
+user = $OS_USERNAME
+key = $OS_PASSWORD
+auth = $OS_AUTH_URL
+tenant = $OS_TENANT_NAME
+```
+
+Note that you may (or may not) need to set `region` too - try without first.
+
+### Configuration from the environment ###
+
+If you prefer you can configure rclone to use swift using a standard
+set of OpenStack environment variables.
+
+When you run through the config, make sure you choose `true` for
+`env_auth` and leave everything else blank.
+
+rclone will then set any empty config parameters from the environment
+using standard OpenStack environment variables.  There is [a list of
+the
+variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment)
+in the docs for the swift library.
+
+#### Using rclone without a config file ####
+
+You can use rclone with swift without a config file, if desired, like
+this:
+
+```
+source openstack-credentials-file
+export RCLONE_CONFIG_MYREMOTE_TYPE=swift
+export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
+rclone lsd myremote:
+```
+
+### --fast-list ###
+
+This remote supports `--fast-list` which allows you to use fewer
+transactions in exchange for more memory. See the [rclone
+docs](/docs/#fast-list) for more details.
+
+### Specific options ###
+
+Here are the command line options specific to this cloud storage
+system.
+
+#### --swift-chunk-size=SIZE ####
+
+Above this size files will be chunked into a _segments container.  The
+default for this is 5GB which is its maximum value.
+
+### Modified time ###
+
+The modified time is stored as metadata on the object as
+`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
+ns.
+
+This is a defacto standard (used in the official python-swiftclient
+amongst others) for storing the modification time for an object.
+
+### Limitations ###
+
+The Swift API doesn't return a correct MD5SUM for segmented files
+(Dynamic or Static Large Objects) so rclone won't check or use the
+MD5SUM for these.
+
+### Troubleshooting ###
+
+#### Rclone gives Failed to create file system for "remote:": Bad Request ####
+
+Due to an oddity of the underlying swift library, it gives a "Bad
+Request" error rather than a more sensible error when the
+authentication fails for Swift.
+
+So this most likely means your username / password is wrong.  You can
+investigate further with the `--dump-bodies` flag.
+
+This may also be caused by specifying the region when you shouldn't
+have (eg OVH).
+
+#### Rclone gives Failed to create file system: Response didn't have storage storage url and auth token ####
+
+This is most likely caused by forgetting to specify your tenant when
+setting up a swift remote.
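+
+For either of these failures a quick way to investigate, as mentioned
+above, is to re-run a simple command with `--dump-bodies` and read the
+raw requests and responses (the remote name is illustrative):
+
+```
+rclone --dump-bodies lsd remote:
+```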
+ +SFTP +---------------------------------------- + +SFTP is the [Secure (or SSH) File Transfer +Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol). + +It runs over SSH v2 and is standard with most modern SSH +installations. + +Paths are specified as `remote:path`. If the path does not begin with +a `/` it is relative to the home directory of the user. An empty path +`remote:` refers to the users home directory. + +Here is an example of making a SFTP configuration. First run + + rclone config + +This will guide you through an interactive setup process. + +``` +No remotes found - make a new one +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. +Choose a number from below, or type in your own value + 1 / Amazon Drive + \ "amazon cloud drive" + 2 / Amazon S3 (also Dreamhost, Ceph, Minio) + \ "s3" + 3 / Backblaze B2 + \ "b2" + 4 / Dropbox + \ "dropbox" + 5 / Encrypt/Decrypt a remote + \ "crypt" + 6 / FTP Connection + \ "ftp" + 7 / Google Cloud Storage (this is not Google Drive) + \ "google cloud storage" + 8 / Google Drive + \ "drive" + 9 / Hubic + \ "hubic" +10 / Local Disk + \ "local" +11 / Microsoft OneDrive + \ "onedrive" +12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) + \ "swift" +13 / SSH/SFTP Connection + \ "sftp" +14 / Yandex Disk + \ "yandex" +15 / http Connection + \ "http" +Storage> sftp +SSH host to connect to +Choose a number from below, or type in your own value + 1 / Connect to example.com + \ "example.com" +host> example.com +SSH username, leave blank for current username, ncw +user> sftpuser +SSH port, leave blank to use default (22) +port> +SSH password, leave blank to use ssh-agent. +y) Yes type in my own password +g) Generate random password +n) No leave this optional password blank +y/g/n> n +Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. +key_file> +Remote config +-------------------- +[remote] +host = example.com +user = sftpuser +port = +pass = +key_file = +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +This remote is called `remote` and can now be used like this + +See all directories in the home directory + + rclone lsd remote: + +Make a new directory + + rclone mkdir remote:path/to/directory + +List the contents of a directory + + rclone ls remote:path/to/directory + +Sync `/home/local/directory` to the remote directory, deleting any +excess files in the directory. + + rclone sync /home/local/directory remote:directory + +### SSH Authentication ### + +The SFTP remote supports 3 authentication methods + + * Password + * Key file + * ssh-agent + +Key files should be unencrypted PEM-encoded private key files. For +instance `/home/$USER/.ssh/id_rsa`. + +If you don't specify `pass` or `key_file` then it will attempt to +contact an ssh-agent. + +### ssh-agent on macOS ### + +Note that there seem to be various problems with using an ssh-agent on +macOS due to recent changes in the OS. The most effective work-around +seems to be to start an ssh-agent in each session, eg + + eval `ssh-agent -s` && ssh-add -A + +And then at the end of the session + + eval `ssh-agent -k` + +These commands can be used in scripts of course. + +### Modified time ### + +Modified times are stored on the server to 1 second precision. + +Modified times are used in syncing and are fully supported. 
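+
+Pulling the SSH Authentication section above together, a key file
+based remote written straight into the config file might look like
+this minimal sketch (host, user and key path are illustrative):
+
+```
+[remote]
+type = sftp
+host = example.com
+user = sftpuser
+key_file = /home/sftpuser/.ssh/id_rsa
+```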
+
+### Limitations ###
+
+SFTP supports checksums if the same login has shell access and `md5sum`
+or `sha1sum` as well as `echo` are in the remote's PATH.
+
+The only ssh agent supported under Windows is PuTTY's Pageant.
+
+SFTP isn't supported under plan9 until [this
+issue](https://github.com/pkg/sftp/issues/156) is fixed.
+
+Note that since SFTP isn't HTTP based the following flags don't work
+with it: `--dump-headers`, `--dump-bodies`, `--dump-auth`
+
+Note that `--timeout` isn't supported (but `--contimeout` is).
+
+Yandex Disk
+----------------------------------------
+
+[Yandex Disk](https://disk.yandex.com) is a cloud storage solution created by [Yandex](https://yandex.com).
+
+Yandex paths may be as deep as required, eg `remote:directory/subdirectory`.
+
+Here is an example of making a yandex configuration.  First run
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+n/s> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+   \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+   \ "s3"
+ 3 / Backblaze B2
+   \ "b2"
+ 4 / Dropbox
+   \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+   \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
+   \ "google cloud storage"
+ 7 / Google Drive
+   \ "drive"
+ 8 / Hubic
+   \ "hubic"
+ 9 / Local Disk
+   \ "local"
+10 / Microsoft OneDrive
+   \ "onedrive"
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+   \ "swift"
+12 / SSH/SFTP Connection
+   \ "sftp"
+13 / Yandex Disk
+   \ "yandex"
+Storage> 13
+Yandex Client Id - leave blank normally.
+client_id>
+Yandex Client Secret - leave blank normally.
+client_secret>
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+client_id =
+client_secret =
+token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+machine with no Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Yandex Disk. This only runs from the moment it
+opens your browser to the moment you get back the verification code.
+This is on `http://127.0.0.1:53682/` and this may require you to
+unblock it temporarily if you are running a host firewall.
+
+Once configured you can then use `rclone` like this,
+
+See top level directories
+
+    rclone lsd remote:
+
+Make a new directory
+
+    rclone mkdir remote:directory
+
+List the contents of a directory
+
+    rclone ls remote:directory
+
+Sync `/home/local/directory` to the remote path, deleting any
+excess files in the path.
+
+    rclone sync /home/local/directory remote:directory
+
+### --fast-list ###
+
+This remote supports `--fast-list` which allows you to use fewer
+transactions in exchange for more memory. See the [rclone
+docs](/docs/#fast-list) for more details.
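+
+For example, to do the sync shown above while using `--fast-list`
+(paths are illustrative):
+
+```
+rclone --fast-list sync /home/local/directory remote:directory
+```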
+ +### Modified time ### + +Modified times are supported and are stored accurate to 1 ns in custom +metadata called `rclone_modified` in RFC3339 with nanoseconds format. + +### MD5 checksums ### + +MD5 checksums are natively supported by Yandex Disk. + +### Emptying Trash ### + +If you wish to empty your trash you can use the `rclone cleanup remote:` +command which will permanently delete all your trashed files. This command +does not take any path arguments. + Local Filesystem ------------------------------------------- @@ -5997,17 +7157,11 @@ $ rclone -L ls /tmp/a 6 b/one ``` -#### --no-local-unicode-normalization #### +#### --local-no-unicode-normalization #### -By default rclone normalizes (NFC) the unicode representation of filenames and -directories. This flag disables that normalization and uses the same -representation as the local filesystem. - -This can be useful if you need to retain the local unicode representation and -you are using a cloud provider which supports unnormalized names (e.g. S3 or ACD). - -This should also work with any provider if you are using crypt and have file -name encryption (the default) or obfuscation turned on. +This flag is deprecated now. Rclone no longer normalizes unicode file +names, but it compares them with unicode normalization in the sync +routine instead. #### --one-file-system, -x #### @@ -6050,9 +7204,80 @@ filesystem. where it isn't supported (eg Windows) it will not appear as an valid flag. +#### --skip-links #### + +This flag disables warning messages on skipped symlinks or junction +points, as you explicitly acknowledge that they should be skipped. + Changelog --------- + * v1.38 - 2017-09-30 + * New backends + * Azure Blob Storage (thanks Andrei Dragomir) + * Box + * Onedrive for Business (thanks Oliver Heyme) + * QingStor from QingCloud (thanks wuyu) + * New commands + * `rcat` - read from standard input and stream upload + * `tree` - shows a nicely formatted recursive listing + * `cryptdecode` - decode crypted file names (thanks ishuah) + * `config show` - print the config file + * `config file` - print the config file location + * New Features + * Empty directories are deleted on `sync` + * `dedupe` - implement merging of duplicate directories + * `check` and `cryptcheck` made more consistent and use less memory + * `cleanup` for remaining remotes (thanks ishuah) + * `--immutable` for ensuring that files don't change (thanks Jacob McNamee) + * `--user-agent` option (thanks Alex McGrath Kraak) + * `--disable` flag to disable optional features + * `--bind` flag for choosing the local addr on outgoing connections + * Support for zsh auto-completion (thanks bpicode) + * Stop normalizing file names but do a normalized compare in `sync` + * Compile + * Update to using go1.9 as the default go version + * Remove snapd build due to maintenance problems + * Bug Fixes + * Improve retriable error detection which makes multipart uploads better + * Make `check` obey `--ignore-size` + * Fix bwlimit toggle in conjunction with schedules (thanks cbruegg) + * `config` ensures newly written config is on the same mount + * Local + * Revert to copy when moving file across file system boundaries + * `--skip-links` to suppress symlink warnings (thanks Zhiming Wang) + * Mount + * Re-use `rcat` internals to support uploads from all remotes + * Dropbox + * Fix "entry doesn't belong in directory" error + * Stop using deprecated API methods + * Swift + * Fix server side copy to empty container with `--fast-list` + * Google Drive + * Change the default for 
`--drive-use-trash` to `true`
+    * S3
+        * Set session token when using STS (thanks Girish Ramakrishnan)
+        * Glacier docs and error messages (thanks Jan Varho)
+        * Read 1000 (not 1024) items in dir listings to fix Wasabi
+    * Backblaze B2
+        * Fix SHA1 mismatch when downloading files with no SHA1
+        * Calculate missing hashes on the fly instead of spooling
+        * `--b2-hard-delete` to permanently delete (not hide) files (thanks John Papandriopoulos)
+    * Hubic
+        * Fix creating containers - no longer have to use the `default` container
+    * Swift
+        * Optionally configure from a standard set of OpenStack environment vars
+        * Add `endpoint_type` config
+    * Google Cloud Storage
+        * Fix bucket creation to work with limited permission users
+    * SFTP
+        * Implement connection pooling for multiple ssh connections
+        * Limit new connections per second
+        * Add support for MD5 and SHA1 hashes where available (thanks Christian Brüggemann)
+    * HTTP
+        * Fix URL encoding issues
+        * Fix directories with `:` in
+        * Fix panic with URL encoded content
  * v1.37 - 2017-07-22
    * New backends
      * FTP - thanks to Antonio Messina
@@ -6066,7 +7291,7 @@ Changelog
    * This allows remotes to list recursively if they can
    * This uses less transactions (important if you pay for them)
    * This may or may not be quicker
-    * This will user more memory as it has to hold the listing in memory
+    * This will use more memory as it has to hold the listing in memory
    * --old-sync-method deprecated - the remaining uses are covered by --fast-list
    * This involved a major re-write of all the listing code
    * Add --tpslimit and --tpslimit-burst to limit transactions per second
@@ -6926,6 +8151,21 @@ fix is to install the Word viewer and the Microsoft Office
Compatibility Pack for Word, Excel, and PowerPoint 2007 and later
versions' file formats

+### tcp lookup some.domain.com no such host ###
+
+This happens when rclone cannot resolve a domain. Please check that
+your DNS setup is generally working, e.g.
+
+```
+# both should print a long list of possible IP addresses
+dig www.googleapis.com          # resolve using your default DNS
+dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
+```
+
+If you are using `systemd-resolved` (default on Arch Linux), ensure it
+is at version 233 or higher. Earlier versions contain a bug which
+prevents some domains from being resolved properly.
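+
+One way to check which systemd (and hence `systemd-resolved`) version
+you are running - the first line of output reports the version:
+
+```
+systemctl --version
+```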
+ License ------- @@ -7038,6 +8278,22 @@ Contributors * sainaen * gdm85 * Yaroslav Halchenko + * John Papandriopoulos + * Zhiming Wang + * Andy Pilate + * Oliver Heyme + * wuyu + * Andrei Dragomir + * Christian Brüggemann + * Alex McGrath Kraak + * bpicode + * Daniel Jagszent + * Josiah White + * Ishuah Kariuki + * Jan Varho + * Girish Ramakrishnan + * LingMan + * Jacob McNamee # Contact the rclone project # diff --git a/MANUAL.txt b/MANUAL.txt index 5c45cfd02..53b0a1fdb 100644 --- a/MANUAL.txt +++ b/MANUAL.txt @@ -1,6 +1,6 @@ rclone(1) User Manual Nick Craig-Wood -Jul 22, 2017 +Sep 30, 2017 @@ -10,21 +10,32 @@ RCLONE [Logo] Rclone is a command line program to sync files and directories to and -from +from: -- Google Drive -- Amazon S3 -- Openstack Swift / Rackspace cloud files / Memset Memstore -- Dropbox -- Google Cloud Storage - Amazon Drive -- Microsoft OneDrive -- Hubic +- Amazon S3 - Backblaze B2 -- Yandex Disk -- SFTP +- Box +- Ceph +- Dreamhost +- Dropbox - FTP +- Google Cloud Storage +- Google Drive - HTTP +- Hubic +- Memset Memstore +- Microsoft Azure Blob Storage +- Microsoft OneDrive +- Minio +- OVH +- Openstack Swift +- Oracle Cloud Storage +- QingStor +- Rackspace Cloud Files +- SFTP +- Wasabi +- Yandex Disk - The local filesystem Features @@ -104,8 +115,12 @@ Unzip the download and cd to the extracted folder. Move rclone to your $PATH. You will be prompted for your password. + sudo mkdir -p /usr/local/bin sudo mv rclone /usr/local/bin/ +(the mkdir command is safe to run, even if the directory already +exists). + Remove the leftover files. cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip @@ -143,62 +158,6 @@ Instructions - rclone -Installation with snap - -Quickstart - -- install Snapd on your distro using the instructions below -- sudo snap install rclone --classic -- Run rclone config to setup. See rclone config docs for more details. - -See below for how to install snapd if it isn't already installed - -Arch - - sudo pacman -S snapd - -enable the snapd systemd service: - - sudo systemctl enable --now snapd.socket - -Debian / Ubuntu - - sudo apt install snapd - -Fedora - - sudo dnf copr enable zyga/snapcore - sudo dnf install snapd - -enable the snapd systemd service: - - sudo systemctl enable --now snapd.service - -SELinux support is in beta, so currently: - - sudo setenforce 0 - -to persist, edit /etc/selinux/config to set SELINUX=permissive and -reboot. - -Gentoo - -Install the gentoo-snappy overlay. - -OpenEmbedded/Yocto - -Install the snap meta layer. - -openSUSE - - sudo zypper addrepo https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/ snappy - sudo zypper install snapd - -OpenWrt - -Enable the snap-openwrt feed. - - Configure First, you'll need to configure rclone. As the object storage systems @@ -213,21 +172,24 @@ option: See the following for detailed instructions for -- Google Drive -- Amazon S3 -- Swift / Rackspace Cloudfiles / Memset Memstore -- Dropbox -- Google Cloud Storage -- Local filesystem - Amazon Drive +- Amazon S3 - Backblaze B2 -- Hubic -- Microsoft OneDrive -- Yandex Disk -- SFTP -- FTP -- HTTP +- Box - Crypt - to encrypt other remotes +- Dropbox +- FTP +- Google Cloud Storage +- Google Drive +- HTTP +- Hubic +- Microsoft Azure Blob Storage +- Microsoft OneDrive +- Openstack Swift / Rackspace Cloudfiles / Memset Memstore +- QingStor +- SFTP +- Yandex Disk +- The local filesystem Usage @@ -260,9 +222,21 @@ Enter an interactive configuration session. Synopsis -Enter an interactive configuration session. 
+rclone config enters an interactive configuration session where you can
+set up new remotes and manage existing ones. You may also set or remove a
+password to protect your configuration.

-    rclone config
+Additional functions:
+
+-   rclone config edit – same as above
+-   rclone config file – show path of configuration file in use
+-   rclone config show – print (decrypted) config file
+
+    rclone config [function] [flags]
+
+Options
+
+      -h, --help   help for config


rclone copy

@@ -309,7 +283,11 @@ source or destination.

See the --no-traverse option for controlling whether rclone lists the
destination directory or not.

-    rclone copy source:path dest:path
+    rclone copy source:path dest:path [flags]
+
+Options
+
+      -h, --help   help for copy


rclone sync

@@ -337,7 +315,11 @@ extended explanation in the copy command above if unsure.

If dest:path doesn't exist, it is created and the source:path contents
go there.

-    rclone sync source:path dest:path
+    rclone sync source:path dest:path [flags]
+
+Options
+
+      -h, --help   help for sync


rclone move

@@ -362,7 +344,11 @@ then delete the original (if no errors on copy) in source:path.

IMPORTANT: Since this can cause data loss, test first with the
--dry-run flag.

-    rclone move source:path dest:path
+    rclone move source:path dest:path [flags]
+
+Options
+
+      -h, --help   help for move


rclone delete

@@ -388,7 +374,11 @@ Then delete

That reads "delete everything with a minimum size of 100 MB", hence
delete all files bigger than 100MBytes.

-    rclone delete remote:path
+    rclone delete remote:path [flags]
+
+Options
+
+      -h, --help   help for delete


rclone purge

@@ -401,7 +391,11 @@ Remove the path and all of its contents.

Note that this does not obey include/exclude filters - everything will
be removed. Use delete if you want to selectively delete files.

-    rclone purge remote:path
+    rclone purge remote:path [flags]
+
+Options
+
+      -h, --help   help for purge


rclone mkdir

@@ -412,7 +406,11 @@ Synopsis

Make the path if it doesn't already exist.

-    rclone mkdir remote:path
+    rclone mkdir remote:path [flags]
+
+Options
+
+      -h, --help   help for mkdir


rclone rmdir

@@ -424,7 +422,11 @@ Synopsis

Remove the path. Note that you can't remove a path with objects in it,
use purge for that.

-    rclone rmdir remote:path
+    rclone rmdir remote:path [flags]
+
+Options
+
+      -h, --help   help for rmdir


rclone check

@@ -450,6 +452,7 @@ the data.

Options

      --download   Check by downloading rather than with hash.
+  -h, --help       help for check


rclone ls

@@ -460,7 +463,11 @@ Synopsis

List all the objects in the path with size and path.

-    rclone ls remote:path
+    rclone ls remote:path [flags]
+
+Options
+
+      -h, --help   help for ls


rclone lsd

@@ -471,7 +478,11 @@ Synopsis

List all directories/containers/buckets in the path.

-    rclone lsd remote:path
+    rclone lsd remote:path [flags]
+
+Options
+
+      -h, --help   help for lsd


rclone lsl

@@ -482,7 +493,11 @@ Synopsis

List all the objects path with modification time, size and path.

-    rclone lsl remote:path
+    rclone lsl remote:path [flags]
+
+Options
+
+      -h, --help   help for lsl


rclone md5sum

@@ -494,7 +509,11 @@ Synopsis

Produces an md5sum file for all the objects in the path. This is in the
same format as the standard md5sum tool produces.

-    rclone md5sum remote:path
+    rclone md5sum remote:path [flags]
+
+Options
+
+      -h, --help   help for md5sum


rclone sha1sum

@@ -506,7 +525,11 @@ Synopsis

Produces an sha1sum file for all the objects in the path. This is in
the same format as the standard sha1sum tool produces.
- rclone sha1sum remote:path + rclone sha1sum remote:path [flags] + +Options + + -h, --help help for sha1sum rclone size @@ -517,7 +540,11 @@ Synopsis Prints the total size and number of objects in remote:path. - rclone size remote:path + rclone size remote:path [flags] + +Options + + -h, --help help for size rclone version @@ -528,7 +555,11 @@ Synopsis Show the version number. - rclone version + rclone version [flags] + +Options + + -h, --help help for version rclone cleanup @@ -540,7 +571,11 @@ Synopsis Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes. - rclone cleanup remote:path + rclone cleanup remote:path [flags] + +Options + + -h, --help help for cleanup rclone dedupe @@ -549,10 +584,14 @@ Interactively find duplicate files delete/rename them. Synopsis -By default dedup interactively finds duplicate files and offers to +By default dedupe interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names. +In the first pass it will merge directories with the same name. It will +do this iteratively until all the identical directories have been +merged. + The dedupe command will delete all but one of any identical (same md5sum) files it finds without confirmation. This means that for most duplicated files the dedupe command will not be interactive. You can use @@ -636,6 +675,7 @@ Or Options --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive") + -h, --help help for dedupe rclone authorize @@ -647,7 +687,11 @@ Synopsis Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config. - rclone authorize + rclone authorize [flags] + +Options + + -h, --help help for authorize rclone cat @@ -682,6 +726,7 @@ Options --count int Only print N characters. (default -1) --discard Discard the output instead of printing. --head int Only print the first N characters. + -h, --help help for cat --offset int Start printing at offset N (or from end if -ve). --tail int Only print the last N characters. @@ -717,7 +762,11 @@ This will: This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination. - rclone copyto source:path dest:path + rclone copyto source:path dest:path [flags] + +Options + + -h, --help help for copyto rclone cryptcheck @@ -749,7 +798,31 @@ files in remote:path. After it has run it will log the status of the encryptedremote:. - rclone cryptcheck remote:path cryptedremote:path + rclone cryptcheck remote:path cryptedremote:path [flags] + +Options + + -h, --help help for cryptcheck + + +rclone cryptdecode + +Cryptdecode returns unencrypted file names. + +Synopsis + +rclone cryptdecode returns unencrypted file names when provided with a +list of encrypted file names. List limit is 10 items. + +use it like this + + rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 + + rclone cryptdecode encryptedremote: encryptedfilename [flags] + +Options + + -h, --help help for cryptdecode rclone dbhashsum @@ -762,11 +835,29 @@ Produces a Dropbox hash file for all the objects in the path. The hashes are calculated according to Dropbox content hash rules. The output is in the same format as md5sum and sha1sum. 
-    rclone dbhashsum remote:path
+    rclone dbhashsum remote:path [flags]
+
+Options
+
+      -h, --help   help for dbhashsum


rclone genautocomplete

+Output completion script for a given shell.
+
+Synopsis
+
+Generates a shell completion script for rclone. Run with --help to list
+the supported shells.
+
+Options
+
+      -h, --help   help for genautocomplete
+
+
+rclone genautocomplete bash
+
Output bash completion script for rclone.

Synopsis

@@ -776,7 +867,7 @@ Generates a bash shell autocompletion script for rclone.

This writes to /etc/bash_completion.d/rclone by default so will
probably need to be run with sudo or as root, eg

-    sudo rclone genautocomplete
+    sudo rclone genautocomplete bash

Logout and login again to use the autocompletion scripts, or source
them directly

@@ -785,7 +876,38 @@ directly

If you supply a command line argument the script will be written
there.

-    rclone genautocomplete [output_file]
+    rclone genautocomplete bash [output_file] [flags]
+
+Options
+
+      -h, --help   help for bash
+
+
+rclone genautocomplete zsh
+
+Output zsh completion script for rclone.
+
+Synopsis
+
+Generates a zsh autocompletion script for rclone.
+
+This writes to /usr/share/zsh/vendor-completions/_rclone by default so
+will probably need to be run with sudo or as root, eg
+
+    sudo rclone genautocomplete zsh
+
+Logout and login again to use the autocompletion scripts, or source them
+directly
+
+    autoload -U compinit && compinit
+
+If you supply a command line argument the script will be written there.
+
+    rclone genautocomplete zsh [output_file] [flags]
+
+Options
+
+      -h, --help   help for zsh


rclone gendocs

@@ -819,6 +941,7 @@ When uses with the -l flag it lists the types too.

Options

+      -h, --help   help for listremotes
  -l, --long   Show the type as well as names.


@@ -853,6 +976,7 @@ can be processed line by line as each item is written one to a line.

Options

      --hash         Include hashes in the output (may take longer).
+  -h, --help         help for lsjson
      --no-modtime   Don't read the modification time (can speed things up).
  -R, --recursive    Recurse into the listing.


@@ -891,6 +1015,30 @@ manually with

    # OS X
    umount /path/to/local/mount

+Installing on Windows
+
+To run rclone mount on Windows, you will need to download and install
+WinFsp.
+
+WinFsp is an open source Windows File System Proxy which makes it easy
+to write user space file systems for Windows. It provides a FUSE
+emulation layer which rclone uses in combination with cgofuse. Both of
+these packages are by Bill Zissimopoulos who was very helpful during the
+implementation of rclone mount for Windows.
+
+Windows caveats
+
+Note that drives created as Administrator are not visible by other
+accounts (including the account that was elevated as Administrator). So
+if you start a Windows drive from an Administrative Command Prompt and
+then try to access the same drive from Explorer (which does not run as
+Administrator), you will not be able to see the new drive.
+
+The easiest way around this is to start the drive from a normal command
+prompt. It is also possible to start a drive from the SYSTEM account
+(using the WinFsp.Launcher infrastructure) which creates drives
+accessible for everyone on the system.
+
Limitations

This can only write files seqentially, it can only seek when reading.
@@ -934,13 +1082,6 @@ rclone instance is running, you can reset the cache like this:

    kill -SIGHUP $(pidof rclone)

-Bugs
-
-- All the remotes should work for read, but some may not for write
-  - those which need to know the size in advance won't - eg B2
-  - maybe should pass in size as -1 to mean work it out
-  - Or put in an an upload cache to cache the files on disk first
-
    rclone mount remote:path /path/to/mountpoint [flags]

Options

@@ -953,6 +1094,7 @@ Options
      --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
      --fuse-flag stringArray     Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
      --gid uint32                Override the gid field set by the filesystem. (default 502)
+  -h, --help                      help for mount
      --max-read-ahead int        The number of bytes that can be prefetched for sequential reads. (default 128k)
      --no-checksum               Don't compare checksums on up/download.
      --no-modtime                Don't read/write the modification time (can speed things up).

@@ -999,7 +1141,11 @@ time or MD5SUM. src will be deleted on successful transfer.

IMPORTANT: Since this can cause data loss, test first with the
--dry-run flag.

-    rclone moveto source:path dest:path
+    rclone moveto source:path dest:path [flags]
+
+Options
+
+      -h, --help   help for moveto


rclone ncdu

@@ -1032,7 +1178,11 @@ This an homage to the ncdu tool but for rclone remotes. It is missing
lots of features at the moment, most importantly deleting files, but is
useful as it stands.

-    rclone ncdu remote:path
+    rclone ncdu remote:path [flags]
+
+Options
+
+      -h, --help   help for ncdu


rclone obscure

@@ -1043,7 +1193,46 @@ Synopsis

Obscure password for use in the rclone.conf

-    rclone obscure password
+    rclone obscure password [flags]
+
+Options
+
+      -h, --help   help for obscure
+
+
+rclone rcat
+
+Copies standard input to file on remote.
+
+Synopsis
+
+rclone rcat reads from standard input (stdin) and copies it to a single
+remote file.
+
+    echo "hello world" | rclone rcat remote:path/to/file
+    ffmpeg - | rclone rcat --checksum remote:path/to/file
+
+If the remote file already exists, it will be overwritten.
+
+rcat will try to upload small files in a single request, which is
+usually more efficient than the streaming/chunked upload endpoints,
+which use multiple requests. Exact behaviour depends on the remote. What
+is considered a small file may be set through --streaming-upload-cutoff.
+Uploading only starts after the cutoff is reached or if the file ends
+before that. The data must fit into RAM. The cutoff needs to be small
+enough to adhere to the limits of your remote, so please check the
+documentation for your remote. Generally speaking, setting this cutoff
+too high will decrease your performance.
+
+Note also that the upload cannot be retried, because the data is not
+kept around until the upload succeeds. If you need to transfer a lot of
+data, you're better off caching it locally and then using rclone move
+to send it to the destination.
+
+    rclone rcat remote:path [flags]
+
+Options
+
+      -h, --help   help for rcat


rclone rmdirs

@@ -1059,7 +1248,67 @@ it has nothing in.

This is useful for tidying up remotes that rclone has left a lot of
empty directories in.

-    rclone rmdirs remote:path
+    rclone rmdirs remote:path [flags]
+
+Options
+
+      -h, --help   help for rmdirs
+
+
+rclone tree
+
+List the contents of the remote in a tree like fashion.
+
+Synopsis
+
+rclone tree lists the contents of a remote in a similar way to the unix
+tree command.
+
+For example
+
+    $ rclone tree remote:path
+    /
+    ├── file1
+    ├── file2
+    ├── file3
+    └── subdir
+        ├── file4
+        └── file5
+
+    1 directories, 5 files
+
+You can use any of the filtering options with the tree command (eg
+--include and --exclude). You can also use --fast-list.
+
+The rclone tree command has many options for controlling the listing,
+which are compatible with those of the unix tree command. Note that not
+all of them have short options as they conflict with rclone's short
+options.
+
+    rclone tree remote:path [flags]
+
+Options
+
+  -a, --all             All files are listed (list . files too).
+  -C, --color           Turn colorization on always.
+  -d, --dirs-only       List directories only.
+      --dirsfirst       List directories before files (-U disables).
+      --full-path       Print the full path prefix for each file.
+  -h, --help            help for tree
+      --human           Print the size in a more human readable way.
+      --level int       Descend only level directories deep.
+  -D, --modtime         Print the date of last modification.
+  -i, --noindent        Don't print indentation lines.
+      --noreport        Turn off file/directory count at end of tree listing.
+  -o, --output string   Output to file instead of stdout.
+  -p, --protections     Print the protections for each file.
+  -Q, --quote           Quote filenames with double quotes.
+  -s, --size            Print the size in bytes of each file.
+      --sort string     Select sort: name,version,size,mtime,ctime.
+      --sort-ctime      Sort files by last status change time.
+  -t, --sort-modtime    Sort files by last modification time.
+  -r, --sort-reverse    Reverse the order of the sort.
+  -U, --unsorted        Leave files unsorted.
+      --version         Sort files alphanumerically by version.


Copying single files

@@ -1214,6 +1463,13 @@ If running rclone from a script you might want to use today's date as
the directory name passed to --backup-dir to store the old files, or
you might want to pass --suffix with today's date.

+--bind string
+
+Local address to bind to for outgoing connections. This can be an IPv4
+address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the
+host name doesn't resolve or resolves to more than one IP address it
+will give an error.
+
--bwlimit=BANDWIDTH_SPEC

This option controls the bandwidth limit. Limits can be specified in two

@@ -1323,6 +1579,26 @@ Mode to run dedupe command in. One of interactive, skip, first, newest,
oldest, rename. The default is interactive. See the dedupe command for
more information as to what these options mean.

+--disable FEATURE,FEATURE,...
+
+This disables a comma separated list of optional features. For example
+to disable server side move and server side copy use:
+
+    --disable move,copy
+
+The features can be given in any case.
+
+To see a list of which features can be disabled use:
+
+    --disable help
+
+See the overview features and optional features to get an idea of which
+feature does what.
+
+This flag can be useful for debugging and in exceptional circumstances
+(eg Google Drive limiting the total volume of Server Side Copies to
+100GB/day).
+
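+For example, a sync with both server side copy and server side move
+disabled might look like this (the paths here are illustrative):
+
+    rclone sync --disable copy,move /path/to/local remote:backup
+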
If an existing file does not +match between the source and destination, rclone will give the error +Source and destination exist but do not match: immutable file modified. + +Note that only commands which transfer files (e.g. sync, copy, move) are +affected by this behavior, and only modification is disallowed. Files +may still be deleted explicitly (e.g. delete, purge) or implicitly (e.g. +sync, move). Use copy --immutable if it is desired to avoid deletion as +well as modification. + +This can be useful as an additional layer of protection for immutable or +append-only data sets (notably backup archives), where modification +implies corruption and should not be propagated. + --log-file=FILE Log all of rclone's output to FILE. This is not active by default. This @@ -1796,6 +2092,9 @@ be very verbose. Useful for debugging only. Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only. +Note that the bodies are buffered in memory so don't use this for +enormous files. + --dump-filters Dump the filters to the output. Useful to see exactly what include and @@ -2319,11 +2618,13 @@ processed in. Prepare a file like this filter-file.txt - # a sample exclude rule file + # a sample filter rule file - secret*.jpg + *.jpg + *.png + file2.avi + - /dir/Trash/** + + /dir/** # exclude everything else - * @@ -2331,8 +2632,10 @@ Then use as --filter-from filter-file.txt. The rules are processed in the order that they are defined. This example will include all jpg and png files, exclude any files -matching secret*.jpg and include file2.avi. Everything else will be -excluded from the sync. +matching secret*.jpg and include file2.avi. It will also include +everything in the directory dir at the root of the sync, except +dir/Trash which it will exclude. Everything else will be excluded from +the sync. --files-from - Read list of source-file names @@ -2485,36 +2788,42 @@ Features Here is an overview of the major features of each cloud storage system. - Name Hash ModTime Case Insensitive Duplicate Files MIME Type - ---------------------- ---------- --------- ------------------ ----------------- ----------- - Google Drive MD5 Yes No Yes R/W - Amazon S3 MD5 Yes No No R/W - Openstack Swift MD5 Yes No No R/W - Dropbox DBHASH † Yes Yes No - - Google Cloud Storage MD5 Yes No No R/W - Amazon Drive MD5 No Yes No R - Microsoft OneDrive SHA1 Yes Yes No R - Hubic MD5 Yes No No R/W - Backblaze B2 SHA1 Yes No No R/W - Yandex Disk MD5 Yes No No R/W - SFTP - Yes Depends No - - FTP - No Yes No - - HTTP - No Yes No R - The local filesystem All Yes Depends No - + Name Hash ModTime Case Insensitive Duplicate Files MIME Type + ------------------------------ ------------- --------- ------------------ ----------------- ----------- + Amazon Drive MD5 No Yes No R + Amazon S3 MD5 Yes No No R/W + Backblaze B2 SHA1 Yes No No R/W + Box SHA1 Yes Yes No - + Dropbox DBHASH † Yes Yes No - + FTP - No No No - + Google Cloud Storage MD5 Yes No No R/W + Google Drive MD5 Yes No Yes R/W + HTTP - No No No R + Hubic MD5 Yes No No R/W + Microsoft Azure Blob Storage MD5 Yes No No R/W + Microsoft OneDrive SHA1 Yes Yes No R + Openstack Swift MD5 Yes No No R/W + QingStor MD5 No No No R/W + SFTP MD5, SHA1 ‡ Yes Depends No - + Yandex Disk MD5 Yes No No R/W + The local filesystem All Yes Depends No - Hash -The cloud storage system supports various hash types of the objects. 
-The hashes are used when transferring data as an integrity check and can
-be specifically used with the --checksum flag in syncs and in the check
-command.

-To use the checksum checks between filesystems they must support a
-common hash type.
+The cloud storage system supports various hash types of the objects. The
+hashes are used when transferring data as an integrity check and can be
+specifically used with the --checksum flag in syncs and in the check
+command.
+
+To verify checksums when transferring between cloud storage systems
+they must support a common hash type.

† Note that Dropbox supports its own custom hash. This is an SHA256 sum
of all the 4MB block SHA256s.

+‡ SFTP supports checksums if the same login has shell access and md5sum
+or sha1sum as well as echo are in the remote's PATH.
+
ModTime

The cloud storage system supports setting modification times on objects.

@@ -2578,22 +2887,25 @@ All the remotes support a basic set of features, but there are some
optional features supported by some remotes used to make some operations
more efficient.

-    Name                   Purge   Copy   Move   DirMove   CleanUp   ListR
-    ---------------------- ------- ------ ------ --------- --------- -------
-    Google Drive           Yes     Yes    Yes    Yes       No #575   No
-    Amazon S3              No      Yes    No     No        No        Yes
-    Openstack Swift        Yes †   Yes    No     No        No        Yes
-    Dropbox                Yes     Yes    Yes    Yes       No #575   No
-    Google Cloud Storage   Yes     Yes    No     No        No        Yes
-    Amazon Drive           Yes     No     Yes    Yes       No #575   No
-    Microsoft OneDrive     Yes     Yes    Yes    No #197   No #575   No
-    Hubic                  Yes †   Yes    No     No        No        Yes
-    Backblaze B2           No      No     No     No        Yes       Yes
-    Yandex Disk            Yes     No     No     No        No #575   Yes
-    SFTP                   No      No     Yes    Yes       No        No
-    FTP                    No      No     Yes    Yes       No        No
-    HTTP                   No      No     No     No        No        No
-    The local filesystem   Yes     No     Yes    Yes       No        No
+    Name                           Purge   Copy   Move   DirMove   CleanUp   ListR   StreamUpload
+    ------------------------------ ------- ------ ------ --------- --------- ------- --------------
+    Amazon Drive                   Yes     No     Yes    Yes       No #575   No      No
+    Amazon S3                      No      Yes    No     No        No        Yes     Yes
+    Backblaze B2                   No      No     No     No        Yes       Yes     Yes
+    Box                            Yes     Yes    Yes    Yes       No #575   No      Yes
+    Dropbox                        Yes     Yes    Yes    Yes       No #575   No      Yes
+    FTP                            No      No     Yes    Yes       No        No      Yes
+    Google Cloud Storage           Yes     Yes    No     No        No        Yes     Yes
+    Google Drive                   Yes     Yes    Yes    Yes       Yes       No      Yes
+    HTTP                           No      No     No     No        No        No      No
+    Hubic                          Yes †   Yes    No     No        No        Yes     Yes
+    Microsoft Azure Blob Storage   Yes     Yes    No     No        No        Yes     No
+    Microsoft OneDrive             Yes     Yes    Yes    No #197   No #575   No      No
+    Openstack Swift                Yes †   Yes    No     No        No        Yes     Yes
+    QingStor                       No      Yes    No     No        No        Yes     No
+    SFTP                           No      No     Yes    Yes       No        No      Yes
+    Yandex Disk                    Yes     No     No     No        Yes       Yes     Yes
+    The local filesystem           Yes     No     Yes    Yes       No        No      Yes

Purge

@@ -2642,17 +2954,39 @@ The remote supports a recursive list to list all the contents beneath a
directory quickly. This enables the --fast-list flag to work. See the
rclone docs for more details.

+StreamUpload

-Google Drive
+Some remotes allow files to be uploaded without knowing the file size in
+advance. This allows certain operations to work without spooling the
+file to local disk first, e.g. rclone rcat.

-Paths are specified as drive:path
-Drive paths may be as deep as required, eg drive:directory/subdirectory.
+Amazon Drive

-The initial setup for drive involves getting a token from Google drive
+Paths are specified as remote:path
+
+Paths may be as deep as required, eg remote:directory/subdirectory.
+
+The initial setup for Amazon Drive involves getting a token from Amazon
which you need to do in your browser. rclone config walks you through
it.

+The configuration process for Amazon Drive may involve using an oauth
+proxy. This is used to keep the Amazon credentials out of the source
+code. The proxy runs in Google's very secure App Engine environment and
+doesn't store any credentials which pass through it.
+
+NB rclone does not currently have its own Amazon Drive credentials
+(see the forum for why) so you will either need to have your own
+client_id and client_secret with Amazon Drive, or use a third party
+oauth proxy in which case you will need to enter client_id,
+client_secret, auth_url and token_url.
+
+Note also that if you are not using Amazon's auth_url and token_url (ie
+you filled in something for those) then if setting up on a remote
+machine you can only configure by copying the config file -
+rclone authorize will not work.
+
Here is an example of how to make a remote called remote. First run:

    rclone config

@@ -2697,15 +3031,20 @@ This will guide you through an interactive setup process:
       \ "sftp"
    14 / Yandex Disk
       \ "yandex"
-    Storage> 8
-    Google Application Client Id - leave blank normally.
-    client_id>
-    Google Application Client Secret - leave blank normally.
-    client_secret>
+    Storage> 1
+    Amazon Application Client Id - required.
+    client_id> your client ID goes here
+    Amazon Application Client Secret - required.
+    client_secret> your client secret goes here
+    Auth server URL - leave blank to use Amazon's.
+    auth_url> Optional auth URL
+    Token server url - leave blank to use Amazon's.
+    token_url> Optional token URL
    Remote config
+    Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
    Use auto config?
     * Say Y if not sure
-     * Say N if you are working on a remote or headless machine or Y didn't work
+     * Say N if you are working on a remote or headless machine
    y) Yes
    n) No
    y/n> y
@@ -2713,340 +3052,122 @@ This will guide you through an interactive setup process:
    Log in and authorize rclone for access
    Waiting for code...
    Got code
-    Configure this as a team drive?
-    y) Yes
-    n) No
-    y/n> n
    --------------------
    [remote]
-    client_id =
-    client_secret =
-    token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
+    client_id = your client ID goes here
+    client_secret = your client secret goes here
+    auth_url = Optional auth URL
+    token_url = Optional token URL
+    token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

+See the remote setup docs for how to set it up on a machine with no
+Internet browser available.
+
Note that rclone runs a webserver on your local machine to collect the
-token as returned from Google if you use auto config mode. This only
-runs from the moment it opens your browser to the moment you get back
-the verification code. This is on http://127.0.0.1:53682/ and this it
-may require you to unblock it temporarily if you are running a host
-firewall, or use manual mode.
+token as returned from Amazon. This only runs from the moment it opens
+your browser to the moment you get back the verification code. This is
+on http://127.0.0.1:53682/ and it may require you to unblock it
+temporarily if you are running a host firewall.
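+
+If you do need to configure by copying the config file, you can print
+the location of the file rclone is using with the config file function
+described in the rclone config section above:
+
+    rclone config file
+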
-You can then use it like this, +Once configured you can then use rclone like this, -List directories in top level of your drive +List directories in top level of your Amazon Drive rclone lsd remote: -List all the files in your drive +List all the files in your Amazon Drive rclone ls remote: -To copy a local directory to a drive directory called backup +To copy a local directory to an Amazon Drive directory called backup rclone copy /home/source remote:backup -Team drives +Modified time and MD5SUMs -If you want to configure the remote to point to a Google Team Drive then -answer y to the question Configure this as a team drive?. +Amazon Drive doesn't allow modification times to be changed via the API +so these won't be accurate or used for syncing. -This will fetch the list of Team Drives from google and allow you to -configure which one you want to use. You can also type in a team drive -ID if you prefer. - -For example: - - Configure this as a team drive? - y) Yes - n) No - y/n> y - Fetching team drive list... - Choose a number from below, or type in your own value - 1 / Rclone Test - \ "xxxxxxxxxxxxxxxxxxxx" - 2 / Rclone Test 2 - \ "yyyyyyyyyyyyyyyyyyyy" - 3 / Rclone Test 3 - \ "zzzzzzzzzzzzzzzzzzzz" - Enter a Team Drive ID> 1 - -------------------- - [remote] - client_id = - client_secret = - token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null} - team_drive = xxxxxxxxxxxxxxxxxxxx - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -Modified time - -Google drive stores modification times accurate to 1 ms. - -Revisions - -Google drive stores revisions of files. When you upload a change to an -existing file to google drive using rclone it will create a new revision -of that file. - -Revisions follow the standard google policy which at time of writing was - -- They are deleted after 30 days or 100 revisions (whatever - comes first). -- They do not count towards a user storage quota. +It does store MD5SUMs so for a more accurate sync, you can use the +--checksum flag. Deleting files -By default rclone will delete files permanently when requested. If -sending them to the trash is required instead then use the ---drive-use-trash flag. +Any files you delete with rclone will end up in the trash. Amazon don't +provide an API to permanently delete files, nor to empty the trash, so +you will have to do that with one of Amazon's apps or via the Amazon +Drive website. As of November 17, 2016, files are automatically deleted +by Amazon from the trash after 30 days. + +Using with non .com Amazon accounts + +Let's say you usually use amazon.co.uk. When you authenticate with +rclone it will take you to an amazon.com page to log in. Your +amazon.co.uk email and password should work here just fine. Specific options Here are the command line options specific to this cloud storage system. ---drive-auth-owner-only +--acd-templink-threshold=SIZE -Only consider files owned by the authenticated user. +Files this size or more will be downloaded via their tempLink. This is +to work around a problem with Amazon Drive which blocks downloads of +files bigger than about 10GB. The default for this is 9GB which +shouldn't need to be changed. 
---drive-chunk-size=SIZE +To download files above this threshold, rclone requests a tempLink which +downloads the file through a temporary URL directly from the underlying +S3 storage. -Upload chunk size. Must a power of 2 >= 256k. Default value is 8 MB. +--acd-upload-wait-per-gb=TIME -Making this larger will improve performance, but note that each chunk is -buffered in memory one per transfer. +Sometimes Amazon Drive gives an error when a file has been fully +uploaded but the file appears anyway after a little while. This happens +sometimes for files over 1GB in size and nearly every time for files +bigger than 10GB. This parameter controls the time rclone waits for the +file to appear. -Reducing this will reduce memory usage but decrease performance. +The default value for this parameter is 3 minutes per GB, so by default +it will wait 3 minutes for every GB uploaded to see if the file appears. ---drive-auth-owner-only +You can disable this feature by setting it to 0. This may cause conflict +errors as rclone retries the failed upload but the file will most likely +appear correctly eventually. -Only consider files owned by the authenticated user. +These values were determined empirically by observing lots of uploads of +big files for a range of file sizes. ---drive-formats - -Google documents can only be exported from Google drive. When rclone -downloads a Google doc it chooses a format to download depending upon -this setting. - -By default the formats are docx,xlsx,pptx,svg which are a sensible -default for an editable document. - -When choosing a format, rclone runs down the list provided in order and -chooses the first file format the doc can be exported as from the list. -If the file can't be exported to a format on the formats list, then -rclone will choose a format from the default list. - -If you prefer an archive copy then you might use --drive-formats pdf, or -if you prefer openoffice/libreoffice formats you might use ---drive-formats ods,odt,odp. - -Note that rclone adds the extension to the google doc, so if it is -calles My Spreadsheet on google docs, it will be exported as -My Spreadsheet.xlsx or My Spreadsheet.pdf etc. - -Here are the possible extensions with their corresponding mime types. - - ------------------------------------- - Extension Mime Type Description - ---------- ------------ ------------- - csv text/csv Standard CSV - format for - Spreadsheets - - doc application/ Micosoft - msword Office - Document - - docx application/ Microsoft - vnd.openxmlf Office - ormats-offic Document - edocument.wo - rdprocessing - ml.document - - epub application/ E-book format - epub+zip - - html text/html An HTML - Document - - jpg image/jpeg A JPEG Image - File - - odp application/ Openoffice - vnd.oasis.op Presentation - endocument.p - resentation - - ods application/ Openoffice - vnd.oasis.op Spreadsheet - endocument.s - preadsheet - - ods application/ Openoffice - x-vnd.oasis. 
Spreadsheet - opendocument - .spreadsheet - - odt application/ Openoffice - vnd.oasis.op Document - endocument.t - ext - - pdf application/ Adobe PDF - pdf Format - - png image/png PNG Image - Format - - pptx application/ Microsoft - vnd.openxmlf Office - ormats-offic Powerpoint - edocument.pr - esentationml - .presentatio - n - - rtf application/ Rich Text - rtf Format - - svg image/svg+xm Scalable - l Vector - Graphics - Format - - tsv text/tab-sep Standard TSV - arated-value format for - s spreadsheets - - txt text/plain Plain Text - - xls application/ Microsoft - vnd.ms-excel Office - Spreadsheet - - xlsx application/ Microsoft - vnd.openxmlf Office - ormats-offic Spreadsheet - edocument.sp - readsheetml. - sheet - - zip application/ A ZIP file of - zip HTML, Images - CSS - ------------------------------------- - ---drive-list-chunk int - -Size of listing chunk 100-1000. 0 to disable. (default 1000) - ---drive-shared-with-me - -Only show files that are shared with me - ---drive-skip-gdocs - -Skip google documents in all listings. If given, gdocs practically -become invisible to rclone. - ---drive-trashed-only - -Only show files that are in the trash. This will show trashed files in -their original directory structure. - ---drive-upload-cutoff=SIZE - -File size cutoff for switching to chunked upload. Default is 8 MB. - ---drive-use-trash - -Send files to the trash instead of deleting permanently. Defaults to -off, namely deleting files permanently. +Upload with the -v flag to see more info about what rclone is doing in +this situation. Limitations -Drive has quite a lot of rate limiting. This causes rclone to be limited -to transferring about 2 files per second only. Individual files may be -transferred much faster at 100s of MBytes/s but lots of small files can -take a long time. +Note that Amazon Drive is case insensitive so you can't have a file +called "Hello.doc" and one called "hello.doc". -Duplicated files +Amazon Drive has rate limiting so you may notice errors in the sync (429 +errors). rclone will automatically retry the sync up to 3 times by +default (see --retries flag) which should hopefully work around this +problem. -Sometimes, for no reason I've been able to track down, drive will -duplicate a file that rclone uploads. Drive unlike all the other remotes -can have duplicated files. +Amazon Drive has an internal limit of file sizes that can be uploaded to +the service. This limit is not officially published, but all files +larger than this will fail. -Duplicated files cause problems with the syncing and you will see -messages in the log about duplicates. +At the time of writing (Jan 2016) is in the area of 50GB per file. This +means that larger files are likely to fail. -Use rclone dedupe to fix duplicated files. - -Note that this isn't just a problem with rclone, even Google Photos on -Android duplicates files on drive sometimes. - -Rclone appears to be re-copying files it shouldn't - -There are two possible reasons for rclone to recopy files which haven't -changed to Google Drive. - -The first is the duplicated file issue above - run rclone dedupe and -check your logs for duplicate object or directory messages. - -The second is that sometimes Google reports different sizes for the -Google Docs exports which will cause rclone to re-download Google Docs -for no apparent reason. --ignore-size is a not very satisfactory -work-around for this if it is causing you a lot of problems. 
- -Google docs downloads sometimes fail with "Failed to copy: read X bytes expecting Y" - -This is the same problem as above. Google reports the google doc is one -size, but rclone downloads a different size. Work-around with the ---ignore-size flag or wait for rclone to retry the download which it -will. - -Making your own client_id - -When you use rclone with Google drive in its default configuration you -are using rclone's client_id. This is shared between all the rclone -users. There is a global rate limit on the number of queries per second -that each client_id can do set by Google. rclone already has a high -quota and I will continue to make sure it is high enough by contacting -Google. - -However you might find you get better performance making your own -client_id if you are a heavy user. Or you may not depending on exactly -how Google have been raising rclone's rate limit. - -Here is how to create your own Google Drive client ID for rclone: - -1. Log into the Google API Console with your Google account. It doesn't - matter what Google account you use. (It need not be the same account - as the Google Drive you want to access) - -2. Select a project or create a new project. - -3. Under Overview, Google APIs, Google Apps APIs, click "Drive API", - then "Enable". - -4. Click "Credentials" in the left-side panel (not "Go to credentials", - which opens the wizard), then "Create credentials", then "OAuth - client ID". It will prompt you to set the OAuth consent screen - product name, if you haven't set one already. - -5. Choose an application type of "other", and click "Create". (the - default name is fine) - -6. It will show you a client ID and client secret. Use these values in - rclone config to add a new remote or edit an existing remote. - -(Thanks to @balazer on github for these instructions.) +Unfortunately there is no way for rclone to see that this failure is +because of file size, so it will retry the operation, as any other +failure. To avoid this problem, use --max-size 50000M option to limit +the maximum size of uploaded files. Note that --max-size does not split +files into segments, it only ignores files over this size. Amazon S3 @@ -3283,12 +3404,14 @@ order of precedence: - Directly in the rclone configuration file (as configured by rclone config) -- set access_key_id and secret_access_key +- set access_key_id and secret_access_key. session_token can be + optionally set when using AWS STS. - Runtime configuration: - set env_auth to true in the config file - Exporting the following environment variables before running rclone - Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY - Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY + - Session Token: AWS_SESSION_TOKEN - Running rclone on an EC2 instance with an IAM role If none of these option actually end up providing rclone with AWS @@ -3340,6 +3463,17 @@ Notes on above: For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync. +Glacier + +You can transition objects to glacier storage using a lifecycle policy. +The bucket can still be synced or copied into normally, but if rclone +tries to access the data you will see an error like below. + + 2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file + +In this case you need to restore the object(s) in question before using +rclone. + Specific options Here are the command line options specific to this cloud storage system. 
@@ -3498,1027 +3632,113 @@ So once set up, for example to copy files into a bucket rclone copy /path/to/files minio:bucket +Wasabi -Swift +Wasabi is a cloud-based object storage service for a broad range of +applications and use cases. Wasabi is designed for individuals and +organizations that require a high-performance, reliable, and secure data +storage infrastructure at minimal cost. -Swift refers to Openstack Object Storage. Commercial implementations of -that being: - -- Rackspace Cloud Files -- Memset Memstore - -Paths are specified as remote:container (or remote: for the lsd -command.) You may put subdirectories in too, eg -remote:container/path/to/dir. - -Here is an example of making a swift configuration. First run - - rclone config - -This will guide you through an interactive setup process. +Wasabi provides an S3 interface which can be configured for use with +rclone like this. No remotes found - make a new one n) New remote s) Set configuration password n/s> n - name> remote + name> wasabi Type of storage to configure. Choose a number from below, or type in your own value 1 / Amazon Drive \ "amazon cloud drive" 2 / Amazon S3 (also Dreamhost, Ceph, Minio) \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" - 10 / Microsoft OneDrive - \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 12 / SSH/SFTP Connection - \ "sftp" - 13 / Yandex Disk - \ "yandex" - Storage> 11 - User name to log in. - user> user_name - API key or password. - key> password_or_api_key - Authentication URL for server. + [snip] + Storage> s3 + Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value - 1 / Rackspace US - \ "https://auth.api.rackspacecloud.com/v1.0" - 2 / Rackspace UK - \ "https://lon.auth.api.rackspacecloud.com/v1.0" - 3 / Rackspace v2 - \ "https://identity.api.rackspacecloud.com/v2.0" - 4 / Memset Memstore UK - \ "https://auth.storage.memset.com/v1.0" - 5 / Memset Memstore UK v2 - \ "https://auth.storage.memset.com/v2.0" - 6 / OVH - \ "https://auth.cloud.ovh.net/v2.0" - auth> 1 - User domain - optional (v3 auth) - domain> Default - Tenant name - optional for v1 auth, required otherwise - tenant> tenant_name - Tenant domain - optional (v3 auth) - tenant_domain> - Region name - optional - region> - Storage URL - optional - storage_url> - AuthVersion - optional - set to (1,2,3) if your auth URL has no version - auth_version> - Remote config - -------------------- - [remote] - user = user_name - key = password_or_api_key - auth = https://auth.api.rackspacecloud.com/v1.0 - domain = Default - tenant = - tenant_domain = - region = - storage_url = - auth_version = - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -This remote is called remote and can now be used like this - -See all containers - - rclone lsd remote: - -Make a new container - - rclone mkdir remote:container - -List the contents of a container - - rclone ls remote:container - -Sync /home/local/directory to the remote container, deleting any excess -files in the container. 
- - rclone sync /home/local/directory remote:container - -Configuration from an Openstack credentials file - -An Opentstack credentials file typically looks something something like -this (without the comments) - - export OS_AUTH_URL=https://a.provider.net/v2.0 - export OS_TENANT_ID=ffffffffffffffffffffffffffffffff - export OS_TENANT_NAME="1234567890123456" - export OS_USERNAME="123abc567xy" - echo "Please enter your OpenStack Password: " - read -sr OS_PASSWORD_INPUT - export OS_PASSWORD=$OS_PASSWORD_INPUT - export OS_REGION_NAME="SBG1" - if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi - -The config file needs to look something like this where $OS_USERNAME -represents the value of the OS_USERNAME variable - 123abc567xy in the -example above. - - [remote] - type = swift - user = $OS_USERNAME - key = $OS_PASSWORD - auth = $OS_AUTH_URL - tenant = $OS_TENANT_NAME - -Note that you may (or may not) need to set region too - try without -first. - ---fast-list - -This remote supports --fast-list which allows you to use fewer -transactions in exchange for more memory. See the rclone docs for more -details. - -Specific options - -Here are the command line options specific to this cloud storage system. - ---swift-chunk-size=SIZE - -Above this size files will be chunked into a _segments container. The -default for this is 5GB which is its maximum value. - -Modified time - -The modified time is stored as metadata on the object as -X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns. - -This is a defacto standard (used in the official python-swiftclient -amongst others) for storing the modification time for an object. - -Limitations - -The Swift API doesn't return a correct MD5SUM for segmented files -(Dynamic or Static Large Objects) so rclone won't check or use the -MD5SUM for these. - -Troubleshooting - -Rclone gives Failed to create file system for "remote:": Bad Request - -Due to an oddity of the underlying swift library, it gives a "Bad -Request" error rather than a more sensible error when the authentication -fails for Swift. - -So this most likely means your username / password is wrong. You can -investigate further with the --dump-bodies flag. - -This may also be caused by specifying the region when you shouldn't have -(eg OVH). - -Rclone gives Failed to create file system: Response didn't have storage storage url and auth token - -This is most likely caused by forgetting to specify your tenant when -setting up a swift remote. - - -Dropbox - -Paths are specified as remote:path - -Dropbox paths may be as deep as required, eg -remote:directory/subdirectory. - -The initial setup for dropbox involves getting a token from Dropbox -which you need to do in your browser. rclone config walks you through -it. - -Here is an example of how to make a remote called remote. First run: - - rclone config - -This will guide you through an interactive setup process: - - n) New remote - d) Delete remote - q) Quit config - e/n/d/q> n - name> remote - Type of storage to configure. + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + env_auth> 1 + AWS Access Key ID - leave blank for anonymous access or runtime credentials. + access_key_id> YOURACCESSKEY + AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. + secret_access_key> YOURSECRETACCESSKEY + Region to connect to. 
Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" - 10 / Microsoft OneDrive - \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 12 / SSH/SFTP Connection - \ "sftp" - 13 / Yandex Disk - \ "yandex" - Storage> 4 - Dropbox App Key - leave blank normally. - app_key> - Dropbox App Secret - leave blank normally. - app_secret> - Remote config - Please visit: - https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code - Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX - -------------------- - [remote] - app_key = - app_secret = - token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -You can then use it like this, - -List directories in top level of your dropbox - - rclone lsd remote: - -List all the files in your dropbox - - rclone ls remote: - -To copy a local directory to a dropbox directory called backup - - rclone copy /home/source remote:backup - -Modified time and Hashes - -Dropbox supports modified times, but the only way to set a modification -time is to re-upload the file. - -This means that if you uploaded your data with an older version of -rclone which didn't support the v2 API and modified times, rclone will -decide to upload all your old data to fix the modification times. If you -don't want this to happen use --size-only or --checksum flag to stop it. - -Dropbox supports its own hash type which is checked for all transfers. - -Specific options - -Here are the command line options specific to this cloud storage system. - ---dropbox-chunk-size=SIZE - -Upload chunk size. Max 150M. The default is 128MB. Note that this isn't -buffered into memory. - -Limitations - -Note that Dropbox is case insensitive so you can't have a file called -"Hello.doc" and one called "hello.doc". - -There are some file names such as thumbs.db which Dropbox can't store. -There is a full list of them in the "Ignored Files" section of this -document. Rclone will issue an error message -File name disallowed - not uploading if it attempt to upload one of -those file names, but the sync won't fail. - -If you have more than 10,000 files in a directory then -rclone purge dropbox:dir will return the error -Failed to purge: There are too many files involved in this operation. As -a work-around do an rclone delete dropbox:dir followed by an -rclone rmdir dropbox:dir. - - -Google Cloud Storage - -Paths are specified as remote:bucket (or remote: for the lsd command.) -You may put subdirectories in too, eg remote:bucket/path/to/dir. - -The initial setup for google cloud storage involves getting a token from -Google Cloud Storage which you need to do in your browser. rclone config -walks you through it. - -Here is an example of how to make a remote called remote. First run: - - rclone config - -This will guide you through an interactive setup process: - - n) New remote - d) Delete remote - q) Quit config - e/n/d/q> n - name> remote - Type of storage to configure. + / The default endpoint - a good choice if you are unsure. 
+ 1 | US Region, Northern Virginia or Pacific Northwest. + | Leave location constraint empty. + \ "us-east-1" + [snip] + region> us-east-1 + Endpoint for S3 API. + Leave blank if using AWS to use the default endpoint for the region. + Specify if using an S3 clone such as Ceph. + endpoint> s3.wasabisys.com + Location constraint - must be set to match the Region. Used when creating buckets only. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" - 10 / Microsoft OneDrive - \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 12 / SSH/SFTP Connection - \ "sftp" - 13 / Yandex Disk - \ "yandex" - Storage> 6 - Google Application Client Id - leave blank normally. - client_id> - Google Application Client Secret - leave blank normally. - client_secret> - Project number optional - needed only for list/create/delete buckets - see your developer console. - project_number> 12345678 - Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. - service_account_file> - Access Control List for new objects. - Choose a number from below, or type in your own value - 1 / Object owner gets OWNER access, and all Authenticated Users get READER access. - \ "authenticatedRead" - 2 / Object owner gets OWNER access, and project team owners get OWNER access. - \ "bucketOwnerFullControl" - 3 / Object owner gets OWNER access, and project team owners get READER access. - \ "bucketOwnerRead" - 4 / Object owner gets OWNER access [default if left blank]. - \ "private" - 5 / Object owner gets OWNER access, and project team members get access according to their roles. - \ "projectPrivate" - 6 / Object owner gets OWNER access, and all Users get READER access. - \ "publicRead" - object_acl> 4 - Access Control List for new buckets. - Choose a number from below, or type in your own value - 1 / Project team owners get OWNER access, and all Authenticated Users get READER access. - \ "authenticatedRead" - 2 / Project team owners get OWNER access [default if left blank]. - \ "private" - 3 / Project team members get access according to their roles. - \ "projectPrivate" - 4 / Project team owners get OWNER access, and all Users get READER access. - \ "publicRead" - 5 / Project team owners get OWNER access, and all Users get WRITER access. - \ "publicReadWrite" - bucket_acl> 2 - Location for the newly created buckets. - Choose a number from below, or type in your own value - 1 / Empty for default location (US). + 1 / Empty for US Region, Northern Virginia or Pacific Northwest. \ "" - 2 / Multi-regional location for Asia. - \ "asia" - 3 / Multi-regional location for Europe. - \ "eu" - 4 / Multi-regional location for United States. - \ "us" - 5 / Taiwan. - \ "asia-east1" - 6 / Tokyo. - \ "asia-northeast1" - 7 / Singapore. - \ "asia-southeast1" - 8 / Sydney. - \ "australia-southeast1" - 9 / Belgium. - \ "europe-west1" - 10 / London. - \ "europe-west2" - 11 / Iowa. - \ "us-central1" - 12 / South Carolina. - \ "us-east1" - 13 / Northern Virginia. - \ "us-east4" - 14 / Oregon. - \ "us-west1" - location> 12 - The storage class to use when storing objects in Google Cloud Storage. 
+ [snip] + location_constraint> + Canned ACL used when creating buckets and/or storing objects in S3. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + Choose a number from below, or type in your own value + 1 / Owner gets FULL_CONTROL. No one else has access rights (default). + \ "private" + [snip] + acl> + The server-side encryption algorithm used when storing this object in S3. + Choose a number from below, or type in your own value + 1 / None + \ "" + 2 / AES256 + \ "AES256" + server_side_encryption> + The storage class to use when storing objects in S3. Choose a number from below, or type in your own value 1 / Default \ "" - 2 / Multi-regional storage class - \ "MULTI_REGIONAL" - 3 / Regional storage class - \ "REGIONAL" - 4 / Nearline storage class - \ "NEARLINE" - 5 / Coldline storage class - \ "COLDLINE" - 6 / Durable reduced availability storage class - \ "DURABLE_REDUCED_AVAILABILITY" - storage_class> 5 + 2 / Standard storage class + \ "STANDARD" + 3 / Reduced redundancy storage class + \ "REDUCED_REDUNDANCY" + 4 / Standard Infrequent Access storage class + \ "STANDARD_IA" + storage_class> Remote config - Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine or Y didn't work - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... - Got code -------------------- - [remote] - type = google cloud storage - client_id = - client_secret = - token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null} - project_number = 12345678 - object_acl = private - bucket_acl = private + [wasabi] + env_auth = false + access_key_id = YOURACCESSKEY + secret_access_key = YOURSECRETACCESSKEY + region = us-east-1 + endpoint = s3.wasabisys.com + location_constraint = + acl = + server_side_encryption = + storage_class = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y -Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if you use auto config mode. This only -runs from the moment it opens your browser to the moment you get back -the verification code. This is on http://127.0.0.1:53682/ and this it -may require you to unblock it temporarily if you are running a host -firewall, or use manual mode. - -This remote is called remote and can now be used like this - -See all the buckets in your project - - rclone lsd remote: - -Make a new bucket - - rclone mkdir remote:bucket - -List the contents of a bucket - - rclone ls remote:bucket - -Sync /home/local/directory to the remote bucket, deleting any excess -files in the bucket. - - rclone sync /home/local/directory remote:bucket - -Service Account support - -You can set up rclone with Google Cloud Storage in an unattended mode, -i.e. not tied to a specific end-user Google account. This is useful when -you want to synchronise files onto machines that don't have actively -logged-in users, for example build machines. - -To get credentials for Google Cloud Platform IAM Service Accounts, -please head to the Service Account section of the Google Developer -Console. 
Service Accounts behave just like normal User permissions in -Google Cloud Storage ACLs, so you can limit their access (e.g. make them -read only). After creating an account, a JSON file containing the -Service Account's credentials will be downloaded onto your machines. -These credentials are what rclone will use for authentication. - -To use a Service Account instead of OAuth2 token flow, enter the path to -your Service Account credentials at the service_account_file prompt and -rclone won't use the browser based authentication flow. - ---fast-list - -This remote supports --fast-list which allows you to use fewer -transactions in exchange for more memory. See the rclone docs for more -details. - -Modified time - -Google google cloud storage stores md5sums natively and rclone stores -modification times as metadata on the object, under the "mtime" key in -RFC3339 format accurate to 1ns. - - -Amazon Drive - -Paths are specified as remote:path - -Paths may be as deep as required, eg remote:directory/subdirectory. - -The initial setup for Amazon Drive involves getting a token from Amazon -which you need to do in your browser. rclone config walks you through -it. - -The configuration process for Amazon Drive may involve using an oauth -proxy. This is used to keep the Amazon credentials out of the source -code. The proxy runs in Google's very secure App Engine environment and -doesn't store any credentials which pass through it. - -NB rclone doesn't not currently have its own Amazon Drive credentials -(see the forum for why) so you will either need to have your own -client_id and client_secret with Amazon Drive, or use a a third party -ouath proxy in which case you will need to enter client_id, -client_secret, auth_url and token_url. - -Note also if you are not using Amazon's auth_url and token_url, (ie you -filled in something for those) then if setting up on a remote machine -you can only use the copying the config method of configuration - -rclone authorize will not work. - -Here is an example of how to make a remote called remote. First run: - - rclone config - -This will guide you through an interactive setup process: - - No remotes found - make a new one - n) New remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - n/r/c/s/q> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" - 10 / Local Disk - \ "local" - 11 / Microsoft OneDrive - \ "onedrive" - 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 13 / SSH/SFTP Connection - \ "sftp" - 14 / Yandex Disk - \ "yandex" - Storage> 1 - Amazon Application Client Id - required. - client_id> your client ID goes here - Amazon Application Client Secret - required. - client_secret> your client secret goes here - Auth server URL - leave blank to use Amazon's. - auth_url> Optional auth URL - Token server url - leave blank to use Amazon's. - token_url> Optional token URL - Remote config - Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config. - Use auto config? 
- * Say Y if not sure - * Say N if you are working on a remote or headless machine - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... - Got code - -------------------- - [remote] - client_id = your client ID goes here - client_secret = your client secret goes here - auth_url = Optional auth URL - token_url = Optional token URL - token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"} - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -See the remote setup docs for how to set it up on a machine with no -Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Amazon. This only runs from the moment it opens -your browser to the moment you get back the verification code. This is -on http://127.0.0.1:53682/ and this it may require you to unblock it -temporarily if you are running a host firewall. - -Once configured you can then use rclone like this, - -List directories in top level of your Amazon Drive - - rclone lsd remote: - -List all the files in your Amazon Drive - - rclone ls remote: - -To copy a local directory to an Amazon Drive directory called backup - - rclone copy /home/source remote:backup - -Modified time and MD5SUMs - -Amazon Drive doesn't allow modification times to be changed via the API -so these won't be accurate or used for syncing. - -It does store MD5SUMs so for a more accurate sync, you can use the ---checksum flag. - -Deleting files - -Any files you delete with rclone will end up in the trash. Amazon don't -provide an API to permanently delete files, nor to empty the trash, so -you will have to do that with one of Amazon's apps or via the Amazon -Drive website. As of November 17, 2016, files are automatically deleted -by Amazon from the trash after 30 days. - -Using with non .com Amazon accounts - -Let's say you usually use amazon.co.uk. When you authenticate with -rclone it will take you to an amazon.com page to log in. Your -amazon.co.uk email and password should work here just fine. - -Specific options - -Here are the command line options specific to this cloud storage system. - ---acd-templink-threshold=SIZE - -Files this size or more will be downloaded via their tempLink. This is -to work around a problem with Amazon Drive which blocks downloads of -files bigger than about 10GB. The default for this is 9GB which -shouldn't need to be changed. - -To download files above this threshold, rclone requests a tempLink which -downloads the file through a temporary URL directly from the underlying -S3 storage. - ---acd-upload-wait-per-gb=TIME - -Sometimes Amazon Drive gives an error when a file has been fully -uploaded but the file appears anyway after a little while. This happens -sometimes for files over 1GB in size and nearly every time for files -bigger than 10GB. This parameter controls the time rclone waits for the -file to appear. - -The default value for this parameter is 3 minutes per GB, so by default -it will wait 3 minutes for every GB uploaded to see if the file appears. - -You can disable this feature by setting it to 0. This may cause conflict -errors as rclone retries the failed upload but the file will most likely -appear correctly eventually. 
- -These values were determined empirically by observing lots of uploads of -big files for a range of file sizes. - -Upload with the -v flag to see more info about what rclone is doing in -this situation. - -Limitations - -Note that Amazon Drive is case insensitive so you can't have a file -called "Hello.doc" and one called "hello.doc". - -Amazon Drive has rate limiting so you may notice errors in the sync (429 -errors). rclone will automatically retry the sync up to 3 times by -default (see --retries flag) which should hopefully work around this -problem. - -Amazon Drive has an internal limit of file sizes that can be uploaded to -the service. This limit is not officially published, but all files -larger than this will fail. - -At the time of writing (Jan 2016) is in the area of 50GB per file. This -means that larger files are likely to fail. - -Unfortunately there is no way for rclone to see that this failure is -because of file size, so it will retry the operation, as any other -failure. To avoid this problem, use --max-size 50000M option to limit -the maximum size of uploaded files. Note that --max-size does not split -files into segments, it only ignores files over this size. - - -Microsoft OneDrive - -Paths are specified as remote:path - -Paths may be as deep as required, eg remote:directory/subdirectory. - -The initial setup for OneDrive involves getting a token from Microsoft -which you need to do in your browser. rclone config walks you through -it. - -Here is an example of how to make a remote called remote. First run: - - rclone config - -This will guide you through an interactive setup process: - - No remotes found - make a new one - n) New remote - s) Set configuration password - n/s> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" - 10 / Microsoft OneDrive - \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 12 / SSH/SFTP Connection - \ "sftp" - 13 / Yandex Disk - \ "yandex" - Storage> 10 - Microsoft App Client Id - leave blank normally. - client_id> - Microsoft App Client Secret - leave blank normally. - client_secret> - Remote config - Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... - Got code - -------------------- - [remote] - client_id = - client_secret = - token = {"access_token":"XXXXXX"} - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -See the remote setup docs for how to set it up on a machine with no -Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Microsoft. This only runs from the moment it -opens your browser to the moment you get back the verification code. -This is on http://127.0.0.1:53682/ and this it may require you to -unblock it temporarily if you are running a host firewall. 
- -Once configured you can then use rclone like this, - -List directories in top level of your OneDrive - - rclone lsd remote: - -List all the files in your OneDrive - - rclone ls remote: - -To copy a local directory to an OneDrive directory called backup - - rclone copy /home/source remote:backup - -Modified time and hashes - -OneDrive allows modification times to be set on objects accurate to 1 -second. These will be used to detect whether objects need syncing or -not. - -One drive supports SHA1 type hashes, so you can use --checksum flag. - -Deleting files - -Any files you delete with rclone will end up in the trash. Microsoft -doesn't provide an API to permanently delete files, nor to empty the -trash, so you will have to do that with one of Microsoft's apps or via -the OneDrive website. - -Specific options - -Here are the command line options specific to this cloud storage system. - ---onedrive-chunk-size=SIZE - -Above this size files will be chunked - must be multiple of 320k. The -default is 10MB. Note that the chunks will be buffered into memory. - ---onedrive-upload-cutoff=SIZE - -Cutoff for switching to chunked upload - must be <= 100MB. The default -is 10MB. - -Limitations - -Note that OneDrive is case insensitive so you can't have a file called -"Hello.doc" and one called "hello.doc". - -Rclone only supports your default OneDrive, and doesn't work with One -Drive for business. Both these issues may be fixed at some point -depending on user demand! - -There are quite a few characters that can't be in OneDrive file names. -These can't occur on Windows platforms, but on non-Windows platforms -they are common. Rclone will map these names to and from an identical -looking unicode equivalent. For example if a file has a ? in it will be -mapped to ? instead. - -The largest allowed file size is 10GiB (10,737,418,240 bytes). - - -Hubic - -Paths are specified as remote:path - -Paths are specified as remote:container (or remote: for the lsd -command.) You may put subdirectories in too, eg -remote:container/path/to/dir. - -The initial setup for Hubic involves getting a token from Hubic which -you need to do in your browser. rclone config walks you through it. - -Here is an example of how to make a remote called remote. First run: - - rclone config - -This will guide you through an interactive setup process: - - n) New remote - s) Set configuration password - n/s> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" - 10 / Microsoft OneDrive - \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 12 / SSH/SFTP Connection - \ "sftp" - 13 / Yandex Disk - \ "yandex" - Storage> 8 - Hubic Client Id - leave blank normally. - client_id> - Hubic Client Secret - leave blank normally. - client_secret> - Remote config - Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... 
- Got code - -------------------- - [remote] - client_id = - client_secret = - token = {"access_token":"XXXXXX"} - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -See the remote setup docs for how to set it up on a machine with no -Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Hubic. This only runs from the moment it opens -your browser to the moment you get back the verification code. This is -on http://127.0.0.1:53682/ and this it may require you to unblock it -temporarily if you are running a host firewall. - -Once configured you can then use rclone like this, - -List containers in the top level of your Hubic - - rclone lsd remote: - -List all the files in your Hubic - - rclone ls remote: - -To copy a local directory to an Hubic directory called backup - - rclone copy /home/source remote:backup - -If you want the directory to be visible in the official _Hubic browser_, -you need to copy your files to the default directory - - rclone copy /home/source remote:default/backup - ---fast-list - -This remote supports --fast-list which allows you to use fewer -transactions in exchange for more memory. See the rclone docs for more -details. - -Modified time - -The modified time is stored as metadata on the object as -X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns. - -This is a defacto standard (used in the official python-swiftclient -amongst others) for storing the modification time for an object. - -Note that Hubic wraps the Swift backend, so most of the properties of -are the same. - -Limitations - -This uses the normal OpenStack Swift mechanism to refresh the Swift API -credentials and ignores the expires field returned by the Hubic API. - -The Swift API doesn't return a correct MD5SUM for segmented files -(Dynamic or Static Large Objects) so rclone won't check or use the -MD5SUM for these. +This will leave the config file looking like this. + + [wasabi] + env_auth = false + access_key_id = YOURACCESSKEY + secret_access_key = YOURSECRETACCESSKEY + region = us-east-1 + endpoint = s3.wasabisys.com + location_constraint = + acl = + server_side_encryption = + storage_class = Backblaze B2 @@ -4650,10 +3870,13 @@ in use at any moment, so this sets the upper limit on the memory used. Versions When rclone uploads a new version of a file it creates a new version of -it. Likewise when you delete a file, the old version will still be -available. +it. Likewise when you delete a file, the old version will be marked +hidden and still be available. Conversely, you may opt in to a "hard +delete" of files with the --b2-hard-delete flag which would permanently +remove the file instead of hiding it. -Old versions of files are visible using the --b2-versions flag. +Old versions of files, where available, are visible using the +--b2-versions flag. If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket command which will delete all the old @@ -4798,406 +4021,14 @@ Note that when using --b2-versions no file write operations are permitted, so you can't upload files or delete them. -Yandex Disk +Box -Yandex Disk is a cloud storage solution created by Yandex. +Paths are specified as remote:path -Yandex paths may be as deep as required, eg -remote:directory/subdirectory. +Paths may be as deep as required, eg remote:directory/subdirectory. -Here is an example of making a yandex configuration. 
First run - - rclone config - -This will guide you through an interactive setup process: - - No remotes found - make a new one - n) New remote - s) Set configuration password - n/s> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" - 10 / Microsoft OneDrive - \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 12 / SSH/SFTP Connection - \ "sftp" - 13 / Yandex Disk - \ "yandex" - Storage> 13 - Yandex Client Id - leave blank normally. - client_id> - Yandex Client Secret - leave blank normally. - client_secret> - Remote config - Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... - Got code - -------------------- - [remote] - client_id = - client_secret = - token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"} - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -See the remote setup docs for how to set it up on a machine with no -Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Yandex Disk. This only runs from the moment it -opens your browser to the moment you get back the verification code. -This is on http://127.0.0.1:53682/ and this it may require you to -unblock it temporarily if you are running a host firewall. - -Once configured you can then use rclone like this, - -See top level directories - - rclone lsd remote: - -Make a new directory - - rclone mkdir remote:directory - -List the contents of a directory - - rclone ls remote:directory - -Sync /home/local/directory to the remote path, deleting any excess files -in the path. - - rclone sync /home/local/directory remote:directory - ---fast-list - -This remote supports --fast-list which allows you to use fewer -transactions in exchange for more memory. See the rclone docs for more -details. - -Modified time - -Modified times are supported and are stored accurate to 1 ns in custom -metadata called rclone_modified in RFC3339 with nanoseconds format. - -MD5 checksums - -MD5 checksums are natively supported by Yandex Disk. - - -SFTP - -SFTP is the Secure (or SSH) File Transfer Protocol. - -It runs over SSH v2 and is standard with most modern SSH installations. - -Paths are specified as remote:path. If the path does not begin with a / -it is relative to the home directory of the user. An empty path remote: -refers to the users home directory. - -Here is an example of making a SFTP configuration. First run - - rclone config - -This will guide you through an interactive setup process. - - No remotes found - make a new one - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n - name> remote - Type of storage to configure. 
- Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" - 10 / Local Disk - \ "local" - 11 / Microsoft OneDrive - \ "onedrive" - 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 13 / SSH/SFTP Connection - \ "sftp" - 14 / Yandex Disk - \ "yandex" - 15 / http Connection - \ "http" - Storage> sftp - SSH host to connect to - Choose a number from below, or type in your own value - 1 / Connect to example.com - \ "example.com" - host> example.com - SSH username, leave blank for current username, ncw - user> sftpuser - SSH port, leave blank to use default (22) - port> - SSH password, leave blank to use ssh-agent. - y) Yes type in my own password - g) Generate random password - n) No leave this optional password blank - y/g/n> n - Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - key_file> - Remote config - -------------------- - [remote] - host = example.com - user = sftpuser - port = - pass = - key_file = - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -This remote is called remote and can now be used like this - -See all directories in the home directory - - rclone lsd remote: - -Make a new directory - - rclone mkdir remote:path/to/directory - -List the contents of a directory - - rclone ls remote:path/to/directory - -Sync /home/local/directory to the remote directory, deleting any excess -files in the directory. - - rclone sync /home/local/directory remote:directory - -SSH Authentication - -The SFTP remote supports 3 authentication methods - -- Password -- Key file -- ssh-agent - -Key files should be unencrypted PEM-encoded private key files. For -instance /home/$USER/.ssh/id_rsa. - -If you don't specify pass or key_file then it will attempt to contact an -ssh-agent. - -ssh-agent on macOS - -Note that there seem to be various problems with using an ssh-agent on -macOS due to recent changes in the OS. The most effective work-around -seems to be to start an ssh-agent in each session, eg - - eval `ssh-agent -s` && ssh-add -A - -And then at the end of the session - - eval `ssh-agent -k` - -These commands can be used in scripts of course. - -Modified time - -Modified times are stored on the server to 1 second precision. - -Modified times are used in syncing and are fully supported. - -Limitations - -SFTP does not support any checksums. - -The only ssh agent supported under Windows is Putty's pagent. - -SFTP isn't supported under plan9 until this issue is fixed. - -Note that since SFTP isn't HTTP based the following flags don't work -with it: --dump-headers, --dump-bodies, --dump-auth - -Note that --timeout isn't supported (but --contimeout is). - - -FTP - -FTP is the File Transfer Protocol. FTP support is provided using the -github.com/jlaffaye/ftp package. - -Here is an example of making an FTP configuration. First run - - rclone config - -This will guide you through an interactive setup process. An FTP remote -only needs a host together with and a username and a password. With -anonymous FTP server, you will need to use anonymous as username and -your email address as the password. 
- - No remotes found - make a new one - n) New remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - n/r/c/s/q> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" - 10 / Local Disk - \ "local" - 11 / Microsoft OneDrive - \ "onedrive" - 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 13 / SSH/SFTP Connection - \ "sftp" - 14 / Yandex Disk - \ "yandex" - Storage> ftp - FTP host to connect to - Choose a number from below, or type in your own value - 1 / Connect to ftp.example.com - \ "ftp.example.com" - host> ftp.example.com - FTP username, leave blank for current username, ncw - user> - FTP port, leave blank to use default (21) - port> - FTP password - y) Yes type in my own password - g) Generate random password - y/g> y - Enter the password: - password: - Confirm the password: - password: - Remote config - -------------------- - [remote] - host = ftp.example.com - user = - port = - pass = *** ENCRYPTED *** - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -This remote is called remote and can now be used like this - -See all directories in the home directory - - rclone lsd remote: - -Make a new directory - - rclone mkdir remote:path/to/directory - -List the contents of a directory - - rclone ls remote:path/to/directory - -Sync /home/local/directory to the remote directory, deleting any excess -files in the directory. - - rclone sync /home/local/directory remote:directory - -Modified time - -FTP does not support modified times. Any times you see on the server -will be time of upload. - -Checksums - -FTP does not support any checksums. - -Limitations - -Note that since FTP isn't HTTP based the following flags don't work with -it: --dump-headers, --dump-bodies, --dump-auth - -Note that --timeout isn't supported (but --contimeout is). - -FTP could support server side move but doesn't yet. - - -HTTP - -The HTTP remote is a read only remote for reading files of a webserver. -The webserver should provide file listings which rclone will read and -turn into a remote. This has been tested with common webservers such as -Apache/Nginx/Caddy and will likely work with file listings from most web -servers. (If it doesn't then please file an issue, or send a pull -request!) - -Paths are specified as remote: or remote:path/to/dir. +The initial setup for Box involves getting a token from Box which you +need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. 
First run:

@@ -5219,50 +4050,111 @@ This will guide you through an interactive setup process:
        \ "s3"
     3 / Backblaze B2
        \ "b2"
-    4 / Dropbox
+    4 / Box
+       \ "box"
+    5 / Dropbox
        \ "dropbox"
-    5 / Encrypt/Decrypt a remote
+    6 / Encrypt/Decrypt a remote
        \ "crypt"
-    6 / FTP Connection
+    7 / FTP Connection
        \ "ftp"
-    7 / Google Cloud Storage (this is not Google Drive)
+    8 / Google Cloud Storage (this is not Google Drive)
        \ "google cloud storage"
-    8 / Google Drive
+    9 / Google Drive
        \ "drive"
-    9 / Hubic
+   10 / Hubic
        \ "hubic"
-   10 / Local Disk
+   11 / Local Disk
        \ "local"
-   11 / Microsoft OneDrive
+   12 / Microsoft OneDrive
        \ "onedrive"
-   12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+   13 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
        \ "swift"
-   13 / SSH/SFTP Connection
+   14 / SSH/SFTP Connection
        \ "sftp"
-   14 / Yandex Disk
+   15 / Yandex Disk
        \ "yandex"
-   15 / http Connection
+   16 / http Connection
        \ "http"
-   Storage> http
-   URL of http host to connect to
-   Choose a number from below, or type in your own value
-    1 / Connect to example.com
-      \ "https://example.com"
-   url> https://beta.rclone.org
+   Storage> box
+   Box App Client Id - leave blank normally.
+   client_id>
+   Box App Client Secret - leave blank normally.
+   client_secret>
    Remote config
+   Use auto config?
+    * Say Y if not sure
+    * Say N if you are working on a remote or headless machine
+   y) Yes
+   n) No
+   y/n> y
+   If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+   Log in and authorize rclone for access
+   Waiting for code...
+   Got code
    --------------------
    [remote]
-   url = https://beta.rclone.org
+   client_id =
+   client_secret =
+   token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y
+
+See the remote setup docs for how to set it up on a machine with no
+Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Box. This only runs from the moment it opens your
+browser to the moment you get back the verification code. This is on
+http://127.0.0.1:53682/ and it may require you to unblock it temporarily
+if you are running a host firewall.
+
+Once configured you can then use rclone like this,
+
+List directories in top level of your Box
+
+    rclone lsd remote:
+
+List all the files in your Box
+
+    rclone ls remote:
+
+To copy a local directory to a Box directory called backup
+
+    rclone copy /home/source remote:backup
+
+Invalid refresh token
+
+According to the box docs:
+
+    Each refresh_token is valid for one use in 60 days.
+
+This means that if you
+
+- Don't use the box remote for 60 days
+- Copy the config file with a box refresh token in and use it in two
+  places
+- Get an error on a token refresh
+
+then rclone will return an error which includes the text
+Invalid refresh token.
+
+To fix this you will need to use oauth2 again to update the refresh
+token. You can use the methods in the remote setup docs, bearing in mind
+that if you use the copy-the-config-file method, you should not use that
+remote on the computer you did the authentication on.
+
+Here is how to do it.
+
+    $ rclone config
    Current remotes:

    Name                 Type
    ====                 ====
-   remote               http
+   remote               box

    e) Edit existing remote
    n) New remote
@@ -5271,47 +4163,88 @@ This will guide you through an interactive setup process:
    c) Copy remote
    s) Set configuration password
    q) Quit config
-   e/n/d/r/c/s/q> q
+   e/n/d/r/c/s/q> e
+   Choose a number from below, or type in an existing value
+    1 > remote
+   remote> remote
+   --------------------
+   [remote]
+   type = box
+   token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
+   --------------------
+   Edit remote
+   Value "client_id" = ""
+   Edit? (y/n)>
+   y) Yes
+   n) No
+   y/n> n
+   Value "client_secret" = ""
+   Edit? (y/n)>
+   y) Yes
+   n) No
+   y/n> n
+   Remote config
+   Already have a token - refresh?
+   y) Yes
+   n) No
+   y/n> y
+   Use auto config?
+    * Say Y if not sure
+    * Say N if you are working on a remote or headless machine
+   y) Yes
+   n) No
+   y/n> y
+   If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+   Log in and authorize rclone for access
+   Waiting for code...
+   Got code
+   --------------------
+   [remote]
+   type = box
+   token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
+   --------------------
+   y) Yes this is OK
+   e) Edit this remote
+   d) Delete this remote
+   y/e/d> y

-This remote is called remote and can now be used like this
+Modified time and hashes

-See all the top level directories
+Box allows modification times to be set on objects accurate to 1 second.
+These will be used to detect whether objects need syncing or not.

-    rclone lsd remote:
+Box supports SHA1 type hashes, so you can use the --checksum flag.

-List the contents of a directory
+Transfers

-    rclone ls remote:directory
+For files above 50MB rclone will use a chunked transfer. Rclone will
+upload up to --transfers chunks at the same time (shared among all the
+multipart uploads). Chunks are buffered in memory and are normally 8MB
+so increasing --transfers will increase memory use.

-Sync the remote directory to /home/local/directory, deleting any excess
-files.
+Deleting files

-    rclone sync remote:directory /home/local/directory
+Depending on the enterprise settings for your user, the item will either
+be actually deleted from Box or moved to the trash.

-Read only
+Specific options

-This remote is read only - you can't upload files to an HTTP server.
+Here are the command line options specific to this cloud storage system.

-Modified time
+--box-upload-cutoff=SIZE

-Most HTTP servers store time accurate to 1 second.
+Cutoff for switching to chunked upload - must be >= 50MB. The default is
+50MB.

-Checksum
+Limitations

-No checksums are stored.
+Note that Box is case insensitive so you can't have a file called
+"Hello.doc" and one called "hello.doc".

-Usage without a config file
+Box file names can't have the \ character in them. rclone maps this to
+and from an identical looking unicode equivalent \.

-Note that since only two environment variable need to be set, it is easy
-to use without a config file like this.
-
-    RCLONE_CONFIG_ZZ_TYPE=http RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org rclone lsd zz:
-
-Or if you prefer
-
-    export RCLONE_CONFIG_ZZ_TYPE=http
-    export RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org
-    rclone lsd zz:
+Box only supports filenames up to 255 characters in length.


Crypt

@@ -5709,6 +4642,2128 @@ encrypted data. For full protection against this you should always use a
salt.
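+
+For example, a crypt remote that uses a salt ends up with both password
+and password2 (the salt) set in the config file. A sketch with
+illustrative names and paths - rclone stores both values obscured, not
+in plain text:
+
+    [secret]
+    type = crypt
+    remote = remote:encrypted
+    filename_encryption = standard
+    password = *** ENCRYPTED ***
+    password2 = *** ENCRYPTED ***
+
+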
+Dropbox
+
+Paths are specified as remote:path
+
+Dropbox paths may be as deep as required, eg
+remote:directory/subdirectory.
+
+The initial setup for dropbox involves getting a token from Dropbox
+which you need to do in your browser. rclone config walks you through
+it.
+
+Here is an example of how to make a remote called remote. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+    n) New remote
+    d) Delete remote
+    q) Quit config
+    e/n/d/q> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+     1 / Amazon Drive
+       \ "amazon cloud drive"
+     2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+       \ "s3"
+     3 / Backblaze B2
+       \ "b2"
+     4 / Dropbox
+       \ "dropbox"
+     5 / Encrypt/Decrypt a remote
+       \ "crypt"
+     6 / Google Cloud Storage (this is not Google Drive)
+       \ "google cloud storage"
+     7 / Google Drive
+       \ "drive"
+     8 / Hubic
+       \ "hubic"
+     9 / Local Disk
+       \ "local"
+    10 / Microsoft OneDrive
+       \ "onedrive"
+    11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+       \ "swift"
+    12 / SSH/SFTP Connection
+       \ "sftp"
+    13 / Yandex Disk
+       \ "yandex"
+    Storage> 4
+    Dropbox App Key - leave blank normally.
+    app_key>
+    Dropbox App Secret - leave blank normally.
+    app_secret>
+    Remote config
+    Please visit:
+    https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
+    Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
+    --------------------
+    [remote]
+    app_key =
+    app_secret =
+    token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+You can then use it like this,
+
+List directories in top level of your dropbox
+
+    rclone lsd remote:
+
+List all the files in your dropbox
+
+    rclone ls remote:
+
+To copy a local directory to a dropbox directory called backup
+
+    rclone copy /home/source remote:backup
+
+Modified time and Hashes
+
+Dropbox supports modified times, but the only way to set a modification
+time is to re-upload the file.
+
+This means that if you uploaded your data with an older version of
+rclone which didn't support the v2 API and modified times, rclone will
+decide to upload all your old data to fix the modification times. If you
+don't want this to happen use the --size-only or --checksum flag to stop
+it.
+
+Dropbox supports its own hash type which is checked for all transfers.
+
+Specific options
+
+Here are the command line options specific to this cloud storage system.
+
+--dropbox-chunk-size=SIZE
+
+Upload chunk size. Max 150M. The default is 128MB. Note that this isn't
+buffered into memory.
+
+Limitations
+
+Note that Dropbox is case insensitive so you can't have a file called
+"Hello.doc" and one called "hello.doc".
+
+There are some file names such as thumbs.db which Dropbox can't store.
+There is a full list of them in the "Ignored Files" section of this
+document. Rclone will issue an error message
+File name disallowed - not uploading if it attempts to upload one of
+those file names, but the sync won't fail.
+
+If you have more than 10,000 files in a directory then
+rclone purge dropbox:dir will return the error
+Failed to purge: There are too many files involved in this operation. As
+a work-around do an rclone delete dropbox:dir followed by an
+rclone rmdir dropbox:dir.
+
+
+FTP
+
+FTP is the File Transfer Protocol. FTP support is provided using the
+github.com/jlaffaye/ftp package.
+
+Here is an example of making an FTP configuration. First run
+
+    rclone config
+
+This will guide you through an interactive setup process. An FTP remote
+only needs a host together with a username and a password. With an
+anonymous FTP server, you will need to use anonymous as the username and
+your email address as the password.
+
+    No remotes found - make a new one
+    n) New remote
+    r) Rename remote
+    c) Copy remote
+    s) Set configuration password
+    q) Quit config
+    n/r/c/s/q> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+     1 / Amazon Drive
+       \ "amazon cloud drive"
+     2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+       \ "s3"
+     3 / Backblaze B2
+       \ "b2"
+     4 / Dropbox
+       \ "dropbox"
+     5 / Encrypt/Decrypt a remote
+       \ "crypt"
+     6 / FTP Connection
+       \ "ftp"
+     7 / Google Cloud Storage (this is not Google Drive)
+       \ "google cloud storage"
+     8 / Google Drive
+       \ "drive"
+     9 / Hubic
+       \ "hubic"
+    10 / Local Disk
+       \ "local"
+    11 / Microsoft OneDrive
+       \ "onedrive"
+    12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+       \ "swift"
+    13 / SSH/SFTP Connection
+       \ "sftp"
+    14 / Yandex Disk
+       \ "yandex"
+    Storage> ftp
+    FTP host to connect to
+    Choose a number from below, or type in your own value
+     1 / Connect to ftp.example.com
+       \ "ftp.example.com"
+    host> ftp.example.com
+    FTP username, leave blank for current username, ncw
+    user>
+    FTP port, leave blank to use default (21)
+    port>
+    FTP password
+    y) Yes type in my own password
+    g) Generate random password
+    y/g> y
+    Enter the password:
+    password:
+    Confirm the password:
+    password:
+    Remote config
+    --------------------
+    [remote]
+    host = ftp.example.com
+    user =
+    port =
+    pass = *** ENCRYPTED ***
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+This remote is called remote and can now be used like this
+
+See all directories in the home directory
+
+    rclone lsd remote:
+
+Make a new directory
+
+    rclone mkdir remote:path/to/directory
+
+List the contents of a directory
+
+    rclone ls remote:path/to/directory
+
+Sync /home/local/directory to the remote directory, deleting any excess
+files in the directory.
+
+    rclone sync /home/local/directory remote:directory
+
+Modified time
+
+FTP does not support modified times. Any times you see on the server
+will be time of upload.
+
+Checksums
+
+FTP does not support any checksums.
+
+Limitations
+
+Note that since FTP isn't HTTP based the following flags don't work with
+it: --dump-headers, --dump-bodies, --dump-auth
+
+Note that --timeout isn't supported (but --contimeout is).
+
+Note that --bind isn't supported.
+
+FTP could support server side move but doesn't yet.
+
+
+Google Cloud Storage
+
+Paths are specified as remote:bucket (or remote: for the lsd command.)
+You may put subdirectories in too, eg remote:bucket/path/to/dir.
+
+The initial setup for google cloud storage involves getting a token from
+Google Cloud Storage which you need to do in your browser. rclone config
+walks you through it.
+
+Here is an example of how to make a remote called remote. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+    n) New remote
+    d) Delete remote
+    q) Quit config
+    e/n/d/q> n
+    name> remote
+    Type of storage to configure.
+ Choose a number from below, or type in your own value + 1 / Amazon Drive + \ "amazon cloud drive" + 2 / Amazon S3 (also Dreamhost, Ceph, Minio) + \ "s3" + 3 / Backblaze B2 + \ "b2" + 4 / Dropbox + \ "dropbox" + 5 / Encrypt/Decrypt a remote + \ "crypt" + 6 / Google Cloud Storage (this is not Google Drive) + \ "google cloud storage" + 7 / Google Drive + \ "drive" + 8 / Hubic + \ "hubic" + 9 / Local Disk + \ "local" + 10 / Microsoft OneDrive + \ "onedrive" + 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) + \ "swift" + 12 / SSH/SFTP Connection + \ "sftp" + 13 / Yandex Disk + \ "yandex" + Storage> 6 + Google Application Client Id - leave blank normally. + client_id> + Google Application Client Secret - leave blank normally. + client_secret> + Project number optional - needed only for list/create/delete buckets - see your developer console. + project_number> 12345678 + Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. + service_account_file> + Access Control List for new objects. + Choose a number from below, or type in your own value + 1 / Object owner gets OWNER access, and all Authenticated Users get READER access. + \ "authenticatedRead" + 2 / Object owner gets OWNER access, and project team owners get OWNER access. + \ "bucketOwnerFullControl" + 3 / Object owner gets OWNER access, and project team owners get READER access. + \ "bucketOwnerRead" + 4 / Object owner gets OWNER access [default if left blank]. + \ "private" + 5 / Object owner gets OWNER access, and project team members get access according to their roles. + \ "projectPrivate" + 6 / Object owner gets OWNER access, and all Users get READER access. + \ "publicRead" + object_acl> 4 + Access Control List for new buckets. + Choose a number from below, or type in your own value + 1 / Project team owners get OWNER access, and all Authenticated Users get READER access. + \ "authenticatedRead" + 2 / Project team owners get OWNER access [default if left blank]. + \ "private" + 3 / Project team members get access according to their roles. + \ "projectPrivate" + 4 / Project team owners get OWNER access, and all Users get READER access. + \ "publicRead" + 5 / Project team owners get OWNER access, and all Users get WRITER access. + \ "publicReadWrite" + bucket_acl> 2 + Location for the newly created buckets. + Choose a number from below, or type in your own value + 1 / Empty for default location (US). + \ "" + 2 / Multi-regional location for Asia. + \ "asia" + 3 / Multi-regional location for Europe. + \ "eu" + 4 / Multi-regional location for United States. + \ "us" + 5 / Taiwan. + \ "asia-east1" + 6 / Tokyo. + \ "asia-northeast1" + 7 / Singapore. + \ "asia-southeast1" + 8 / Sydney. + \ "australia-southeast1" + 9 / Belgium. + \ "europe-west1" + 10 / London. + \ "europe-west2" + 11 / Iowa. + \ "us-central1" + 12 / South Carolina. + \ "us-east1" + 13 / Northern Virginia. + \ "us-east4" + 14 / Oregon. + \ "us-west1" + location> 12 + The storage class to use when storing objects in Google Cloud Storage. + Choose a number from below, or type in your own value + 1 / Default + \ "" + 2 / Multi-regional storage class + \ "MULTI_REGIONAL" + 3 / Regional storage class + \ "REGIONAL" + 4 / Nearline storage class + \ "NEARLINE" + 5 / Coldline storage class + \ "COLDLINE" + 6 / Durable reduced availability storage class + \ "DURABLE_REDUCED_AVAILABILITY" + storage_class> 5 + Remote config + Use auto config? 
+    * Say Y if not sure
+    * Say N if you are working on a remote or headless machine or Y didn't work
+    y) Yes
+    n) No
+    y/n> y
+    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+    Log in and authorize rclone for access
+    Waiting for code...
+    Got code
+    --------------------
+    [remote]
+    type = google cloud storage
+    client_id =
+    client_secret =
+    token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
+    project_number = 12345678
+    object_acl = private
+    bucket_acl = private
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Google if you use auto config mode. This only
+runs from the moment it opens your browser to the moment you get back
+the verification code. This is on http://127.0.0.1:53682/ and it may
+require you to unblock it temporarily if you are running a host
+firewall, or use manual mode.
+
+This remote is called remote and can now be used like this
+
+See all the buckets in your project
+
+    rclone lsd remote:
+
+Make a new bucket
+
+    rclone mkdir remote:bucket
+
+List the contents of a bucket
+
+    rclone ls remote:bucket
+
+Sync /home/local/directory to the remote bucket, deleting any excess
+files in the bucket.
+
+    rclone sync /home/local/directory remote:bucket
+
+Service Account support
+
+You can set up rclone with Google Cloud Storage in an unattended mode,
+i.e. not tied to a specific end-user Google account. This is useful when
+you want to synchronise files onto machines that don't have actively
+logged-in users, for example build machines.
+
+To get credentials for Google Cloud Platform IAM Service Accounts,
+please head to the Service Account section of the Google Developer
+Console. Service Accounts behave just like normal User permissions in
+Google Cloud Storage ACLs, so you can limit their access (e.g. make them
+read only). After creating an account, a JSON file containing the
+Service Account's credentials will be downloaded onto your machines.
+These credentials are what rclone will use for authentication.
+
+To use a Service Account instead of OAuth2 token flow, enter the path to
+your Service Account credentials at the service_account_file prompt and
+rclone won't use the browser based authentication flow.
+
+--fast-list
+
+This remote supports --fast-list which allows you to use fewer
+transactions in exchange for more memory. See the rclone docs for more
+details.
+
+Modified time
+
+Google Cloud Storage stores md5sums natively and rclone stores
+modification times as metadata on the object, under the "mtime" key in
+RFC3339 format accurate to 1ns.
+
+
+Google Drive
+
+Paths are specified as drive:path
+
+Drive paths may be as deep as required, eg drive:directory/subdirectory.
+
+The initial setup for drive involves getting a token from Google drive
+which you need to do in your browser. rclone config walks you through
+it.
+
+Here is an example of how to make a remote called remote. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+    No remotes found - make a new one
+    n) New remote
+    r) Rename remote
+    c) Copy remote
+    s) Set configuration password
+    q) Quit config
+    n/r/c/s/q> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+     1 / Amazon Drive
+       \ "amazon cloud drive"
+     2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+       \ "s3"
+     3 / Backblaze B2
+       \ "b2"
+     4 / Dropbox
+       \ "dropbox"
+     5 / Encrypt/Decrypt a remote
+       \ "crypt"
+     6 / FTP Connection
+       \ "ftp"
+     7 / Google Cloud Storage (this is not Google Drive)
+       \ "google cloud storage"
+     8 / Google Drive
+       \ "drive"
+     9 / Hubic
+       \ "hubic"
+    10 / Local Disk
+       \ "local"
+    11 / Microsoft OneDrive
+       \ "onedrive"
+    12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+       \ "swift"
+    13 / SSH/SFTP Connection
+       \ "sftp"
+    14 / Yandex Disk
+       \ "yandex"
+    Storage> 8
+    Google Application Client Id - leave blank normally.
+    client_id>
+    Google Application Client Secret - leave blank normally.
+    client_secret>
+    Remote config
+    Use auto config?
+     * Say Y if not sure
+     * Say N if you are working on a remote or headless machine or Y didn't work
+    y) Yes
+    n) No
+    y/n> y
+    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+    Log in and authorize rclone for access
+    Waiting for code...
+    Got code
+    Configure this as a team drive?
+    y) Yes
+    n) No
+    y/n> n
+    --------------------
+    [remote]
+    client_id =
+    client_secret =
+    token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Google if you use auto config mode. This only
+runs from the moment it opens your browser to the moment you get back
+the verification code. This is on http://127.0.0.1:53682/ and it may
+require you to unblock it temporarily if you are running a host
+firewall, or use manual mode.
+
+You can then use it like this,
+
+List directories in top level of your drive
+
+    rclone lsd remote:
+
+List all the files in your drive
+
+    rclone ls remote:
+
+To copy a local directory to a drive directory called backup
+
+    rclone copy /home/source remote:backup
+
+Team drives
+
+If you want to configure the remote to point to a Google Team Drive
+then answer y to the question Configure this as a team drive?.
+
+This will fetch the list of Team Drives from Google and allow you to
+configure which one you want to use. You can also type in a team drive
+ID if you prefer.
+
+For example:
+
+    Configure this as a team drive?
+    y) Yes
+    n) No
+    y/n> y
+    Fetching team drive list...
+    Choose a number from below, or type in your own value
+     1 / Rclone Test
+       \ "xxxxxxxxxxxxxxxxxxxx"
+     2 / Rclone Test 2
+       \ "yyyyyyyyyyyyyyyyyyyy"
+     3 / Rclone Test 3
+       \ "zzzzzzzzzzzzzzzzzzzz"
+    Enter a Team Drive ID> 1
+    --------------------
+    [remote]
+    client_id =
+    client_secret =
+    token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
+    team_drive = xxxxxxxxxxxxxxxxxxxx
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+Modified time
+
+Google drive stores modification times accurate to 1 ms.
+
+Revisions
+
+Google drive stores revisions of files. When you upload a change to an
+existing file on Google drive using rclone, it will create a new
+revision of that file.
+
+Revisions follow the standard google policy which at time of writing
+was
+
+- They are deleted after 30 days or 100 revisions (whichever
+  comes first).
+- They do not count towards a user storage quota.
+
+Deleting files
+
+By default rclone will send all files to the trash when deleting files.
+If deleting them permanently is required then use the
+--drive-use-trash=false flag, or set the equivalent environment
+variable.
+
+Emptying trash
+
+If you wish to empty your trash you can use the rclone cleanup remote:
+command which will permanently delete all your trashed files. This
+command does not take any path arguments.
+
+Specific options
+
+Here are the command line options specific to this cloud storage system.
+
+--drive-auth-owner-only
+
+Only consider files owned by the authenticated user.
+
+--drive-chunk-size=SIZE
+
+Upload chunk size. Must be a power of 2 >= 256k. Default value is 8 MB.
+
+Making this larger will improve performance, but note that each chunk
+is buffered in memory, one per transfer.
+
+Reducing this will reduce memory usage but decrease performance.
+
+--drive-formats
+
+Google documents can only be exported from Google drive. When rclone
+downloads a Google doc it chooses a format to download depending upon
+this setting.
+
+By default the formats are docx,xlsx,pptx,svg which are sensible
+defaults for an editable document.
+
+When choosing a format, rclone runs down the list provided in order and
+chooses the first file format the doc can be exported as from the list.
+If the file can't be exported to a format on the formats list, then
+rclone will choose a format from the default list.
+
+If you prefer an archive copy then you might use --drive-formats pdf,
+or if you prefer openoffice/libreoffice formats you might use
+--drive-formats ods,odt,odp.
+
+Note that rclone adds the extension to the google doc, so if it is
+called My Spreadsheet on google docs, it will be exported as
+My Spreadsheet.xlsx or My Spreadsheet.pdf etc.
+
+Here are the possible extensions with their corresponding mime types.
+
+    Extension   Mime Type                                                                    Description
+    ---------   ---------                                                                    -----------
+    csv         text/csv                                                                     Standard CSV format for Spreadsheets
+    doc         application/msword                                                           Microsoft Office Document
+    docx        application/vnd.openxmlformats-officedocument.wordprocessingml.document      Microsoft Office Document
+    epub        application/epub+zip                                                         E-book format
+    html        text/html                                                                    An HTML Document
+    jpg         image/jpeg                                                                   A JPEG Image File
+    odp         application/vnd.oasis.opendocument.presentation                              Openoffice Presentation
+    ods         application/vnd.oasis.opendocument.spreadsheet                               Openoffice Spreadsheet
+    ods         application/x-vnd.oasis.opendocument.spreadsheet                             Openoffice Spreadsheet
+    odt         application/vnd.oasis.opendocument.text                                      Openoffice Document
+    pdf         application/pdf                                                              Adobe PDF Format
+    png         image/png                                                                    PNG Image Format
+    pptx        application/vnd.openxmlformats-officedocument.presentationml.presentation    Microsoft Office Powerpoint
+    rtf         application/rtf                                                              Rich Text Format
+    svg         image/svg+xml                                                                Scalable Vector Graphics Format
+    tsv         text/tab-separated-values                                                    Standard TSV format for spreadsheets
+    txt         text/plain                                                                   Plain Text
+    xls         application/vnd.ms-excel                                                     Microsoft Office Spreadsheet
+    xlsx        application/vnd.openxmlformats-officedocument.spreadsheetml.sheet            Microsoft Office Spreadsheet
+    zip         application/zip                                                              A ZIP file of HTML, Images and CSS
+
+--drive-list-chunk int
+
+Size of listing chunk 100-1000. 0 to disable. (default 1000)
+
+--drive-shared-with-me
+
+Only show files that are shared with me
+
+--drive-skip-gdocs
+
+Skip google documents in all listings. If given, gdocs practically
+become invisible to rclone.
+
+--drive-trashed-only
+
+Only show files that are in the trash. This will show trashed files in
+their original directory structure.
+
+--drive-upload-cutoff=SIZE
+
+File size cutoff for switching to chunked upload. Default is 8 MB.
+
+--drive-use-trash
+
+Controls whether files are sent to the trash or deleted permanently.
+Defaults to true, namely sending files to the trash. Use
+--drive-use-trash=false to delete files permanently instead.
+
+Limitations
+
+Drive has quite a lot of rate limiting. This causes rclone to be
+limited to transferring about 2 files per second only. Individual files
+may be transferred much faster at 100s of MBytes/s but lots of small
+files can take a long time.
+
+Server side copies are also subject to a separate rate limit. If you
+see User rate limit exceeded errors, wait at least 24 hours and retry.
+You can disable server side copies with --disable copy to download and
+upload the files if you prefer.
+
+Duplicated files
+
+Sometimes, for no reason I've been able to track down, drive will
+duplicate a file that rclone uploads. Drive, unlike all the other
+remotes, can have duplicated files.
+
+Duplicated files cause problems with the syncing and you will see
+messages in the log about duplicates.
+
+Use rclone dedupe to fix duplicated files.
+
+Note that this isn't just a problem with rclone, even Google Photos on
+Android duplicates files on drive sometimes.
+
+Rclone appears to be re-copying files it shouldn't
+
+There are two possible reasons for rclone to recopy files which haven't
+changed to Google Drive.
+
+The first is the duplicated file issue above - run rclone dedupe and
+check your logs for duplicate object or directory messages.
+
+The second is that sometimes Google reports different sizes for the
+Google Docs exports which will cause rclone to re-download Google Docs
+for no apparent reason. --ignore-size is a not very satisfactory
+work-around for this if it is causing you a lot of problems.
+
+Google docs downloads sometimes fail with "Failed to copy: read X bytes expecting Y"
+
+This is the same problem as above. Google reports the google doc is one
+size, but rclone downloads a different size. Work-around with the
+--ignore-size flag or wait for rclone to retry the download which it
+will.
+
+Making your own client_id
+
+When you use rclone with Google drive in its default configuration you
+are using rclone's client_id. This is shared between all the rclone
+users. There is a global rate limit on the number of queries per second
+that each client_id can do set by Google. rclone already has a high
+quota and I will continue to make sure it is high enough by contacting
+Google.
+
+However you might find you get better performance making your own
+client_id if you are a heavy user. Or you may not, depending on exactly
+how Google has been raising rclone's rate limit.
+
+Here is how to create your own Google Drive client ID for rclone:
+
+1.  Log into the Google API Console with your Google account. It
+    doesn't matter what Google account you use. (It need not be the
+    same account as the Google Drive you want to access)
+
+2.  Select a project or create a new project.
+
+3.  Under Overview, Google APIs, Google Apps APIs, click "Drive API",
+    then "Enable".
+
+4.  Click "Credentials" in the left-side panel (not "Go to credentials",
+    which opens the wizard), then "Create credentials", then "OAuth
+    client ID". It will prompt you to set the OAuth consent screen
+    product name, if you haven't set one already.
+
+5.  Choose an application type of "other", and click "Create". (the
+    default name is fine)
+
+6.  It will show you a client ID and client secret. Use these values in
+    rclone config to add a new remote or edit an existing remote.
+
+(Thanks to @balazer on github for these instructions.)
+
+
+HTTP
+
+The HTTP remote is a read only remote for reading files from a
+webserver. The webserver should provide file listings which rclone will
+read and turn into a remote. This has been tested with common webservers
+such as Apache/Nginx/Caddy and will likely work with file listings from
+most web servers. (If it doesn't then please file an issue, or send a
+pull request!)
+
+Paths are specified as remote: or remote:path/to/dir.
+
+Here is an example of how to make a remote called remote. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+    No remotes found - make a new one
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+     1 / Amazon Drive
+       \ "amazon cloud drive"
+     2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+       \ "s3"
+     3 / Backblaze B2
+       \ "b2"
+     4 / Dropbox
+       \ "dropbox"
+     5 / Encrypt/Decrypt a remote
+       \ "crypt"
+     6 / FTP Connection
+       \ "ftp"
+     7 / Google Cloud Storage (this is not Google Drive)
+       \ "google cloud storage"
+     8 / Google Drive
+       \ "drive"
+     9 / Hubic
+       \ "hubic"
+    10 / Local Disk
+       \ "local"
+    11 / Microsoft OneDrive
+       \ "onedrive"
+    12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+       \ "swift"
+    13 / SSH/SFTP Connection
+       \ "sftp"
+    14 / Yandex Disk
+       \ "yandex"
+    15 / http Connection
+       \ "http"
+    Storage> http
+    URL of http host to connect to
+    Choose a number from below, or type in your own value
+     1 / Connect to example.com
+       \ "https://example.com"
+    url> https://beta.rclone.org
+    Remote config
+    --------------------
+    [remote]
+    url = https://beta.rclone.org
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+    Current remotes:
+
+    Name                 Type
+    ====                 ====
+    remote               http
+
+    e) Edit existing remote
+    n) New remote
+    d) Delete remote
+    r) Rename remote
+    c) Copy remote
+    s) Set configuration password
+    q) Quit config
+    e/n/d/r/c/s/q> q
+
+This remote is called remote and can now be used like this
+
+See all the top level directories
+
+    rclone lsd remote:
+
+List the contents of a directory
+
+    rclone ls remote:directory
+
+Sync the remote directory to /home/local/directory, deleting any excess
+files.
+
+    rclone sync remote:directory /home/local/directory
+
+Read only
+
+This remote is read only - you can't upload files to an HTTP server.
+
+Modified time
+
+Most HTTP servers store time accurate to 1 second.
+
+Checksum
+
+No checksums are stored.
+
+Usage without a config file
+
+Note that since only two environment variables need to be set, it is
+easy to use without a config file like this.
+
+    RCLONE_CONFIG_ZZ_TYPE=http RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org rclone lsd zz:
+
+Or if you prefer
+
+    export RCLONE_CONFIG_ZZ_TYPE=http
+    export RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org
+    rclone lsd zz:
+
+
+Hubic
+
+Paths are specified as remote:path
+
+Paths are specified as remote:container (or remote: for the lsd
+command.) You may put subdirectories in too, eg
+remote:container/path/to/dir.
+
+The initial setup for Hubic involves getting a token from Hubic which
+you need to do in your browser. rclone config walks you through it.
+
+Here is an example of how to make a remote called remote. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+    n) New remote
+    s) Set configuration password
+    n/s> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+     1 / Amazon Drive
+       \ "amazon cloud drive"
+     2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+       \ "s3"
+     3 / Backblaze B2
+       \ "b2"
+     4 / Dropbox
+       \ "dropbox"
+     5 / Encrypt/Decrypt a remote
+       \ "crypt"
+     6 / Google Cloud Storage (this is not Google Drive)
+       \ "google cloud storage"
+     7 / Google Drive
+       \ "drive"
+     8 / Hubic
+       \ "hubic"
+     9 / Local Disk
+       \ "local"
+    10 / Microsoft OneDrive
+       \ "onedrive"
+    11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+       \ "swift"
+    12 / SSH/SFTP Connection
+       \ "sftp"
+    13 / Yandex Disk
+       \ "yandex"
+    Storage> 8
+    Hubic Client Id - leave blank normally.
+    client_id>
+    Hubic Client Secret - leave blank normally.
+    client_secret>
+    Remote config
+    Use auto config?
+     * Say Y if not sure
+     * Say N if you are working on a remote or headless machine
+    y) Yes
+    n) No
+    y/n> y
+    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+    Log in and authorize rclone for access
+    Waiting for code...
+    Got code
+    --------------------
+    [remote]
+    client_id =
+    client_secret =
+    token = {"access_token":"XXXXXX"}
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+See the remote setup docs for how to set it up on a machine with no
+Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Hubic. This only runs from the moment it opens
+your browser to the moment you get back the verification code. This is
+on http://127.0.0.1:53682/ and it may require you to unblock it
+temporarily if you are running a host firewall.
+
+Once configured you can then use rclone like this,
+
+List containers in the top level of your Hubic
+
+    rclone lsd remote:
+
+List all the files in your Hubic
+
+    rclone ls remote:
+
+To copy a local directory to a Hubic directory called backup
+
+    rclone copy /home/source remote:backup
+
+If you want the directory to be visible in the official _Hubic browser_,
+you need to copy your files to the default directory
+
+    rclone copy /home/source remote:default/backup
+
+--fast-list
+
+This remote supports --fast-list which allows you to use fewer
+transactions in exchange for more memory. See the rclone docs for more
+details.
+
+Modified time
+
+The modified time is stored as metadata on the object as
+X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
+
+This is a de facto standard (used in the official python-swiftclient
+amongst others) for storing the modification time for an object.
+
+Note that Hubic wraps the Swift backend, so most of the properties are
+the same.
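+
+Since the modified time is stored as object metadata, a quick way to
+check that it is being preserved is to compare listings - rclone lsl
+prints the stored modification time next to each object. A sketch,
+assuming the remote and the default/backup directory from the examples
+above:
+
+    # both listings should show matching times for unchanged files
+    rclone lsl /home/source
+    rclone lsl remote:default/backup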
+
+Limitations
+
+This uses the normal OpenStack Swift mechanism to refresh the Swift API
+credentials and ignores the expires field returned by the Hubic API.
+
+The Swift API doesn't return a correct MD5SUM for segmented files
+(Dynamic or Static Large Objects) so rclone won't check or use the
+MD5SUM for these.
+
+
+Microsoft Azure Blob Storage
+
+Paths are specified as remote:container (or remote: for the lsd
+command.) You may put subdirectories in too, eg
+remote:container/path/to/dir.
+
+Here is an example of making a Microsoft Azure Blob Storage
+configuration, for a remote called remote. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+    No remotes found - make a new one
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+     1 / Amazon Drive
+       \ "amazon cloud drive"
+     2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+       \ "s3"
+     3 / Backblaze B2
+       \ "b2"
+     4 / Box
+       \ "box"
+     5 / Dropbox
+       \ "dropbox"
+     6 / Encrypt/Decrypt a remote
+       \ "crypt"
+     7 / FTP Connection
+       \ "ftp"
+     8 / Google Cloud Storage (this is not Google Drive)
+       \ "google cloud storage"
+     9 / Google Drive
+       \ "drive"
+    10 / Hubic
+       \ "hubic"
+    11 / Local Disk
+       \ "local"
+    12 / Microsoft Azure Blob Storage
+       \ "azureblob"
+    13 / Microsoft OneDrive
+       \ "onedrive"
+    14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+       \ "swift"
+    15 / SSH/SFTP Connection
+       \ "sftp"
+    16 / Yandex Disk
+       \ "yandex"
+    17 / http Connection
+       \ "http"
+    Storage> azureblob
+    Storage Account Name
+    account> account_name
+    Storage Account Key
+    key> base64encodedkey==
+    Endpoint for the service - leave blank normally.
+    endpoint>
+    Remote config
+    --------------------
+    [remote]
+    account = account_name
+    key = base64encodedkey==
+    endpoint =
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+See all containers
+
+    rclone lsd remote:
+
+Make a new container
+
+    rclone mkdir remote:container
+
+List the contents of a container
+
+    rclone ls remote:container
+
+Sync /home/local/directory to the remote container, deleting any excess
+files in the container.
+
+    rclone sync /home/local/directory remote:container
+
+--fast-list
+
+This remote supports --fast-list which allows you to use fewer
+transactions in exchange for more memory. See the rclone docs for more
+details.
+
+Modified time
+
+The modified time is stored as metadata on the object with the mtime
+key. It is stored using RFC3339 Format time with nanosecond precision.
+The metadata is supplied during directory listings so there is no
+overhead to using it.
+
+Hashes
+
+MD5 hashes are stored with blobs. However blobs that were uploaded in
+chunks only have an MD5 if the source remote was capable of MD5 hashes,
+eg the local disk.
+
+Multipart uploads
+
+Rclone supports multipart uploads with Azure Blob storage. Files bigger
+than 256MB will be uploaded using chunked upload by default.
+
+The files will be uploaded in parallel in 4MB chunks (by default). Note
+that these chunks are buffered in memory and there may be up to
+--transfers of them being uploaded at once.
+
+Files can't be split into more than 50,000 chunks, so by default the
+largest file that can be uploaded with 4MB chunk size is 195GB. Above
+this rclone will double the chunk size until it creates fewer than
+50,000 chunks.
+By default this will mean a maximum file size of 3.2TB can be
+uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M.
+
+Note that rclone doesn't commit the block list until the end of the
+upload which means that there is a limit of 9.5TB of multipart uploads
+in progress as Azure won't allow more than that amount of uncommitted
+blocks.
+
+Specific options
+
+Here are the command line options specific to this cloud storage system.
+
+--azureblob-upload-cutoff=SIZE
+
+Cutoff for switching to chunked upload - must be <= 256MB. The default
+is 256MB.
+
+--azureblob-chunk-size=SIZE
+
+Upload chunk size. Default 4MB. Note that this is stored in memory and
+there may be up to --transfers chunks stored at once in memory. This can
+be at most 100MB.
+
+Limitations
+
+MD5 sums are only uploaded with chunked files if the source has an MD5
+sum. This will always be the case for a local to azure copy.
+
+
+Microsoft OneDrive
+
+Paths are specified as remote:path
+
+Paths may be as deep as required, eg remote:directory/subdirectory.
+
+The initial setup for OneDrive involves getting a token from Microsoft
+which you need to do in your browser. rclone config walks you through
+it.
+
+Here is an example of how to make a remote called remote. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+    No remotes found - make a new one
+    n) New remote
+    s) Set configuration password
+    n/s> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+     1 / Amazon Drive
+       \ "amazon cloud drive"
+     2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+       \ "s3"
+     3 / Backblaze B2
+       \ "b2"
+     4 / Dropbox
+       \ "dropbox"
+     5 / Encrypt/Decrypt a remote
+       \ "crypt"
+     6 / Google Cloud Storage (this is not Google Drive)
+       \ "google cloud storage"
+     7 / Google Drive
+       \ "drive"
+     8 / Hubic
+       \ "hubic"
+     9 / Local Disk
+       \ "local"
+    10 / Microsoft OneDrive
+       \ "onedrive"
+    11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+       \ "swift"
+    12 / SSH/SFTP Connection
+       \ "sftp"
+    13 / Yandex Disk
+       \ "yandex"
+    Storage> 10
+    Microsoft App Client Id - leave blank normally.
+    client_id>
+    Microsoft App Client Secret - leave blank normally.
+    client_secret>
+    Remote config
+    Choose OneDrive account type?
+     * Say b for a OneDrive business account
+     * Say p for a personal OneDrive account
+    b) Business
+    p) Personal
+    b/p> p
+    Use auto config?
+     * Say Y if not sure
+     * Say N if you are working on a remote or headless machine
+    y) Yes
+    n) No
+    y/n> y
+    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+    Log in and authorize rclone for access
+    Waiting for code...
+    Got code
+    --------------------
+    [remote]
+    client_id =
+    client_secret =
+    token = {"access_token":"XXXXXX"}
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+See the remote setup docs for how to set it up on a machine with no
+Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Microsoft. This only runs from the moment it
+opens your browser to the moment you get back the verification code.
+This is on http://127.0.0.1:53682/ and it may require you to unblock it
+temporarily if you are running a host firewall.
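+
+If you answered n to the auto config question (for example on a
+headless machine), one approach - a sketch using the rclone authorize
+command with the "onedrive" type used above - is to run the following
+on any machine that does have a browser, then paste the token it
+prints into rclone config when prompted:
+
+    # run on the machine with a browser; prints a token when done
+    rclone authorize "onedrive"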
+
+Once configured you can then use rclone like this,
+
+List directories in the top level of your OneDrive
+
+    rclone lsd remote:
+
+List all the files in your OneDrive
+
+    rclone ls remote:
+
+To copy a local directory to a OneDrive directory called backup
+
+    rclone copy /home/source remote:backup
+
+OneDrive for Business
+
+There is additional support for OneDrive for Business. Select "b" when
+asked
+
+    Choose OneDrive account type?
+     * Say b for a OneDrive business account
+     * Say p for a personal OneDrive account
+    b) Business
+    p) Personal
+    b/p>
+
+After that rclone requires an authentication of your account. The
+application will first authenticate your account, then query the
+OneDrive resource URL and do a second (silent) authentication for this
+resource URL.
+
+Modified time and hashes
+
+OneDrive allows modification times to be set on objects accurate to 1
+second. These will be used to detect whether objects need syncing or
+not.
+
+OneDrive supports SHA1 type hashes, so you can use the --checksum flag.
+
+Deleting files
+
+Any files you delete with rclone will end up in the trash. Microsoft
+doesn't provide an API to permanently delete files, nor to empty the
+trash, so you will have to do that with one of Microsoft's apps or via
+the OneDrive website.
+
+Specific options
+
+Here are the command line options specific to this cloud storage system.
+
+--onedrive-chunk-size=SIZE
+
+Above this size files will be chunked - must be a multiple of 320k. The
+default is 10MB. Note that the chunks will be buffered into memory.
+
+--onedrive-upload-cutoff=SIZE
+
+Cutoff for switching to chunked upload - must be <= 100MB. The default
+is 10MB.
+
+Limitations
+
+Note that OneDrive is case insensitive so you can't have a file called
+"Hello.doc" and one called "hello.doc".
+
+There are quite a few characters that can't be in OneDrive file names.
+These can't occur on Windows platforms, but on non-Windows platforms
+they are common. Rclone will map these names to and from an identical
+looking unicode equivalent. For example, if a file has a ? in it, it
+will be mapped to ？ instead.
+
+The largest allowed file size is 10GiB (10,737,418,240 bytes).
+
+
+QingStor
+
+Paths are specified as remote:bucket (or remote: for the lsd command.)
+You may put subdirectories in too, eg remote:bucket/path/to/dir.
+
+Here is an example of making a QingStor configuration. First run
+
+    rclone config
+
+This will guide you through an interactive setup process.
+
+    No remotes found - make a new one
+    n) New remote
+    r) Rename remote
+    c) Copy remote
+    s) Set configuration password
+    q) Quit config
+    n/r/c/s/q> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+     1 / Amazon Drive
+       \ "amazon cloud drive"
+     2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+       \ "s3"
+     3 / Backblaze B2
+       \ "b2"
+     4 / Dropbox
+       \ "dropbox"
+     5 / Encrypt/Decrypt a remote
+       \ "crypt"
+     6 / FTP Connection
+       \ "ftp"
+     7 / Google Cloud Storage (this is not Google Drive)
+       \ "google cloud storage"
+     8 / Google Drive
+       \ "drive"
+     9 / Hubic
+       \ "hubic"
+    10 / Local Disk
+       \ "local"
+    11 / Microsoft OneDrive
+       \ "onedrive"
+    12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+       \ "swift"
+    13 / QingStor Object Storage
+       \ "qingstor"
+    14 / SSH/SFTP Connection
+       \ "sftp"
+    15 / Yandex Disk
+       \ "yandex"
+    Storage> 13
+    Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+    Choose a number from below, or type in your own value
+     1 / Enter QingStor credentials in the next step
+       \ "false"
+     2 / Get QingStor credentials from the environment (env vars or IAM)
+       \ "true"
+    env_auth> 1
+    QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
+    access_key_id> access_key
+    QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+    secret_access_key> secret_key
+    Enter a endpoint URL to connection QingStor API.
+    Leave blank will use the default value "https://qingstor.com:443"
+    endpoint>
+    Zone connect to. Default is "pek3a".
+    Choose a number from below, or type in your own value
+       / The Beijing (China) Three Zone
+     1 | Needs location constraint pek3a.
+       \ "pek3a"
+       / The Shanghai (China) First Zone
+     2 | Needs location constraint sh1a.
+       \ "sh1a"
+    zone> 1
+    Number of connnection retry.
+    Leave blank will use the default value "3".
+    connection_retries>
+    Remote config
+    --------------------
+    [remote]
+    env_auth = false
+    access_key_id = access_key
+    secret_access_key = secret_key
+    endpoint =
+    zone = pek3a
+    connection_retries =
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+This remote is called remote and can now be used like this
+
+See all buckets
+
+    rclone lsd remote:
+
+Make a new bucket
+
+    rclone mkdir remote:bucket
+
+List the contents of a bucket
+
+    rclone ls remote:bucket
+
+Sync /home/local/directory to the remote bucket, deleting any excess
+files in the bucket.
+
+    rclone sync /home/local/directory remote:bucket
+
+--fast-list
+
+This remote supports --fast-list which allows you to use fewer
+transactions in exchange for more memory. See the rclone docs for more
+details.
+
+Multipart uploads
+
+rclone supports multipart uploads with QingStor which means that it can
+upload files bigger than 5GB. Note that files uploaded with multipart
+upload don't have an MD5SUM.
+
+Buckets and Zone
+
+With QingStor you can list buckets (rclone lsd) using any zone, but you
+can only access the content of a bucket from the zone it was created in.
+If you attempt to access a bucket from the wrong zone, you will get an
+error, incorrect zone, the bucket is not in 'XXX' zone.
+
+Authentication
+
+There are two ways to supply rclone with a set of QingStor credentials.
+In order of precedence:
+
+- Directly in the rclone configuration file (as configured by
+  rclone config)
+  - set access_key_id and secret_access_key
+- Runtime configuration:
+  - set env_auth to true in the config file
+  - Exporting the following environment variables before running rclone
+    - Access Key ID: QS_ACCESS_KEY_ID or QS_ACCESS_KEY
+    - Secret Access Key: QS_SECRET_ACCESS_KEY or QS_SECRET_KEY
+
+
+Swift
+
+Swift refers to Openstack Object Storage. Commercial implementations
+include:
+
+- Rackspace Cloud Files
+- Memset Memstore
+- OVH Object Storage
+- Oracle Cloud Storage
+
+Paths are specified as remote:container (or remote: for the lsd
+command.) You may put subdirectories in too, eg
+remote:container/path/to/dir.
+
+Here is an example of making a swift configuration. First run
+
+    rclone config
+
+This will guide you through an interactive setup process.
+
+    No remotes found - make a new one
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> remote
+    Type of storage to configure.
+ Choose a number from below, or type in your own value + 1 / Amazon Drive + \ "amazon cloud drive" + 2 / Amazon S3 (also Dreamhost, Ceph, Minio) + \ "s3" + 3 / Backblaze B2 + \ "b2" + 4 / Box + \ "box" + 5 / Dropbox + \ "dropbox" + 6 / Encrypt/Decrypt a remote + \ "crypt" + 7 / FTP Connection + \ "ftp" + 8 / Google Cloud Storage (this is not Google Drive) + \ "google cloud storage" + 9 / Google Drive + \ "drive" + 10 / Hubic + \ "hubic" + 11 / Local Disk + \ "local" + 12 / Microsoft Azure Blob Storage + \ "azureblob" + 13 / Microsoft OneDrive + \ "onedrive" + 14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) + \ "swift" + 15 / QingClound Object Storage + \ "qingstor" + 16 / SSH/SFTP Connection + \ "sftp" + 17 / Yandex Disk + \ "yandex" + 18 / http Connection + \ "http" + Storage> swift + Get swift credentials from environment variables in standard OpenStack form. + Choose a number from below, or type in your own value + 1 / Enter swift credentials in the next step + \ "false" + 2 / Get swift credentials from environment vars. Leave other fields blank if using this. + \ "true" + env_auth> 1 + User name to log in. + user> user_name + API key or password. + key> password_or_api_key + Authentication URL for server. + Choose a number from below, or type in your own value + 1 / Rackspace US + \ "https://auth.api.rackspacecloud.com/v1.0" + 2 / Rackspace UK + \ "https://lon.auth.api.rackspacecloud.com/v1.0" + 3 / Rackspace v2 + \ "https://identity.api.rackspacecloud.com/v2.0" + 4 / Memset Memstore UK + \ "https://auth.storage.memset.com/v1.0" + 5 / Memset Memstore UK v2 + \ "https://auth.storage.memset.com/v2.0" + 6 / OVH + \ "https://auth.cloud.ovh.net/v2.0" + auth> 1 + User domain - optional (v3 auth) + domain> Default + Tenant name - optional for v1 auth, required otherwise + tenant> tenant_name + Tenant domain - optional (v3 auth) + tenant_domain> + Region name - optional + region> + Storage URL - optional + storage_url> + AuthVersion - optional - set to (1,2,3) if your auth URL has no version + auth_version> + Endpoint type to choose from the service catalogue + Choose a number from below, or type in your own value + 1 / Public (default, choose this if not sure) + \ "public" + 2 / Internal (use internal service net) + \ "internal" + 3 / Admin + \ "admin" + endpoint_type> + Remote config + -------------------- + [remote] + env_auth = false + user = user_name + key = password_or_api_key + auth = https://auth.api.rackspacecloud.com/v1.0 + domain = Default + tenant = + tenant_domain = + region = + storage_url = + auth_version = + endpoint_type = + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y + +This remote is called remote and can now be used like this + +See all containers + + rclone lsd remote: + +Make a new container + + rclone mkdir remote:container + +List the contents of a container + + rclone ls remote:container + +Sync /home/local/directory to the remote container, deleting any excess +files in the container. 
+
+    rclone sync /home/local/directory remote:container
+
+Configuration from an Openstack credentials file
+
+An OpenStack credentials file typically looks something like this
+(without the comments)
+
+    export OS_AUTH_URL=https://a.provider.net/v2.0
+    export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
+    export OS_TENANT_NAME="1234567890123456"
+    export OS_USERNAME="123abc567xy"
+    echo "Please enter your OpenStack Password: "
+    read -sr OS_PASSWORD_INPUT
+    export OS_PASSWORD=$OS_PASSWORD_INPUT
+    export OS_REGION_NAME="SBG1"
+    if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
+
+The config file needs to look something like this where $OS_USERNAME
+represents the value of the OS_USERNAME variable - 123abc567xy in the
+example above.
+
+    [remote]
+    type = swift
+    user = $OS_USERNAME
+    key = $OS_PASSWORD
+    auth = $OS_AUTH_URL
+    tenant = $OS_TENANT_NAME
+
+Note that you may (or may not) need to set region too - try without
+first.
+
+Configuration from the environment
+
+If you prefer you can configure rclone to use swift using a standard set
+of OpenStack environment variables.
+
+When you run through the config, make sure you choose true for env_auth
+and leave everything else blank.
+
+rclone will then set any empty config parameters from the environment
+using standard OpenStack environment variables. There is a list of the
+variables in the docs for the swift library.
+
+Using rclone without a config file
+
+You can use rclone with swift without a config file, if desired, like
+this:
+
+    source openstack-credentials-file
+    export RCLONE_CONFIG_MYREMOTE_TYPE=swift
+    export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
+    rclone lsd myremote:
+
+--fast-list
+
+This remote supports --fast-list which allows you to use fewer
+transactions in exchange for more memory. See the rclone docs for more
+details.
+
+Specific options
+
+Here are the command line options specific to this cloud storage system.
+
+--swift-chunk-size=SIZE
+
+Above this size files will be chunked into a _segments container. The
+default for this is 5GB which is its maximum value.
+
+Modified time
+
+The modified time is stored as metadata on the object as
+X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
+
+This is a de facto standard (used in the official python-swiftclient
+amongst others) for storing the modification time for an object.
+
+Limitations
+
+The Swift API doesn't return a correct MD5SUM for segmented files
+(Dynamic or Static Large Objects) so rclone won't check or use the
+MD5SUM for these.
+
+Troubleshooting
+
+Rclone gives Failed to create file system for "remote:": Bad Request
+
+Due to an oddity of the underlying swift library, it gives a "Bad
+Request" error rather than a more sensible error when the authentication
+fails for Swift.
+
+So this most likely means your username / password is wrong. You can
+investigate further with the --dump-bodies flag.
+
+This may also be caused by specifying the region when you shouldn't have
+(eg OVH).
+
+Rclone gives Failed to create file system: Response didn't have storage url and auth token
+
+This is most likely caused by forgetting to specify your tenant when
+setting up a swift remote.
+
+
+SFTP
+
+SFTP is the Secure (or SSH) File Transfer Protocol.
+
+It runs over SSH v2 and is standard with most modern SSH installations.
+
+Paths are specified as remote:path. If the path does not begin with a /
+it is relative to the home directory of the user. An empty path remote:
+refers to the user's home directory.
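+
+For example, assuming a configured SFTP remote named remote (a name
+used here purely for illustration), the following refer to different
+places on the server:
+
+    rclone lsd remote:              # the user's home directory
+    rclone lsd remote:backup        # backup, relative to the home directory
+    rclone lsd remote:/var/backup   # an absolute path on the server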
+
+Here is an example of making an SFTP configuration. First run
+
+    rclone config
+
+This will guide you through an interactive setup process.
+
+    No remotes found - make a new one
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+     1 / Amazon Drive
+       \ "amazon cloud drive"
+     2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+       \ "s3"
+     3 / Backblaze B2
+       \ "b2"
+     4 / Dropbox
+       \ "dropbox"
+     5 / Encrypt/Decrypt a remote
+       \ "crypt"
+     6 / FTP Connection
+       \ "ftp"
+     7 / Google Cloud Storage (this is not Google Drive)
+       \ "google cloud storage"
+     8 / Google Drive
+       \ "drive"
+     9 / Hubic
+       \ "hubic"
+    10 / Local Disk
+       \ "local"
+    11 / Microsoft OneDrive
+       \ "onedrive"
+    12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+       \ "swift"
+    13 / SSH/SFTP Connection
+       \ "sftp"
+    14 / Yandex Disk
+       \ "yandex"
+    15 / http Connection
+       \ "http"
+    Storage> sftp
+    SSH host to connect to
+    Choose a number from below, or type in your own value
+     1 / Connect to example.com
+       \ "example.com"
+    host> example.com
+    SSH username, leave blank for current username, ncw
+    user> sftpuser
+    SSH port, leave blank to use default (22)
+    port>
+    SSH password, leave blank to use ssh-agent.
+    y) Yes type in my own password
+    g) Generate random password
+    n) No leave this optional password blank
+    y/g/n> n
+    Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+    key_file>
+    Remote config
+    --------------------
+    [remote]
+    host = example.com
+    user = sftpuser
+    port =
+    pass =
+    key_file =
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+This remote is called remote and can now be used like this
+
+See all directories in the home directory
+
+    rclone lsd remote:
+
+Make a new directory
+
+    rclone mkdir remote:path/to/directory
+
+List the contents of a directory
+
+    rclone ls remote:path/to/directory
+
+Sync /home/local/directory to the remote directory, deleting any excess
+files in the directory.
+
+    rclone sync /home/local/directory remote:directory
+
+SSH Authentication
+
+The SFTP remote supports 3 authentication methods
+
+- Password
+- Key file
+- ssh-agent
+
+Key files should be unencrypted PEM-encoded private key files. For
+instance /home/$USER/.ssh/id_rsa.
+
+If you don't specify pass or key_file then it will attempt to contact an
+ssh-agent.
+
+ssh-agent on macOS
+
+Note that there seem to be various problems with using an ssh-agent on
+macOS due to recent changes in the OS. The most effective work-around
+seems to be to start an ssh-agent in each session, eg
+
+    eval `ssh-agent -s` && ssh-add -A
+
+And then at the end of the session
+
+    eval `ssh-agent -k`
+
+These commands can be used in scripts of course.
+
+Modified time
+
+Modified times are stored on the server to 1 second precision.
+
+Modified times are used in syncing and are fully supported.
+
+Limitations
+
+SFTP supports checksums if the same login has shell access and md5sum or
+sha1sum as well as echo are in the remote's PATH.
+
+The only ssh agent supported under Windows is PuTTY's pageant.
+
+SFTP isn't supported under plan9 until this issue is fixed.
+
+Note that since SFTP isn't HTTP based the following flags don't work
+with it: --dump-headers, --dump-bodies, --dump-auth
+
+Note that --timeout isn't supported (but --contimeout is).
+
+
+Yandex Disk
+
+Yandex Disk is a cloud storage solution created by Yandex.
+
+Yandex paths may be as deep as required, eg
+remote:directory/subdirectory.
+
+Here is an example of making a yandex configuration. First run
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+    No remotes found - make a new one
+    n) New remote
+    s) Set configuration password
+    n/s> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+     1 / Amazon Drive
+       \ "amazon cloud drive"
+     2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+       \ "s3"
+     3 / Backblaze B2
+       \ "b2"
+     4 / Dropbox
+       \ "dropbox"
+     5 / Encrypt/Decrypt a remote
+       \ "crypt"
+     6 / Google Cloud Storage (this is not Google Drive)
+       \ "google cloud storage"
+     7 / Google Drive
+       \ "drive"
+     8 / Hubic
+       \ "hubic"
+     9 / Local Disk
+       \ "local"
+    10 / Microsoft OneDrive
+       \ "onedrive"
+    11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+       \ "swift"
+    12 / SSH/SFTP Connection
+       \ "sftp"
+    13 / Yandex Disk
+       \ "yandex"
+    Storage> 13
+    Yandex Client Id - leave blank normally.
+    client_id>
+    Yandex Client Secret - leave blank normally.
+    client_secret>
+    Remote config
+    Use auto config?
+     * Say Y if not sure
+     * Say N if you are working on a remote or headless machine
+    y) Yes
+    n) No
+    y/n> y
+    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+    Log in and authorize rclone for access
+    Waiting for code...
+    Got code
+    --------------------
+    [remote]
+    client_id =
+    client_secret =
+    token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+See the remote setup docs for how to set it up on a machine with no
+Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Yandex Disk. This only runs from the moment it
+opens your browser to the moment you get back the verification code.
+This is on http://127.0.0.1:53682/ and it may require you to unblock it
+temporarily if you are running a host firewall.
+
+Once configured you can then use rclone like this,
+
+See top level directories
+
+    rclone lsd remote:
+
+Make a new directory
+
+    rclone mkdir remote:directory
+
+List the contents of a directory
+
+    rclone ls remote:directory
+
+Sync /home/local/directory to the remote path, deleting any excess files
+in the path.
+
+    rclone sync /home/local/directory remote:directory
+
+--fast-list
+
+This remote supports --fast-list which allows you to use fewer
+transactions in exchange for more memory. See the rclone docs for more
+details.
+
+Modified time
+
+Modified times are supported and are stored accurate to 1 ns in custom
+metadata called rclone_modified in RFC3339 with nanoseconds format.
+
+MD5 checksums
+
+MD5 checksums are natively supported by Yandex Disk.
+
+Emptying Trash
+
+If you wish to empty your trash you can use the rclone cleanup remote:
+command which will permanently delete all your trashed files. This
+command does not take any path arguments.
+
+
 Local Filesystem
 
 Local paths are specified as normal filesystem paths, eg
@@ -5814,18 +6869,11 @@ and
     6 b/two
     6 b/one
---no-local-unicode-normalization
+--local-no-unicode-normalization
-By default rclone normalizes (NFC) the unicode representation of
-filenames and directories. This flag disables that normalization and
-uses the same representation as the local filesystem.
- -This can be useful if you need to retain the local unicode -representation and you are using a cloud provider which supports -unnormalized names (e.g. S3 or ACD). - -This should also work with any provider if you are using crypt and have -file name encryption (the default) or obfuscation turned on. +This flag is deprecated now. Rclone no longer normalizes unicode file +names, but it compares them with unicode normalization in the sync +routine instead. --one-file-system, -x @@ -5861,9 +6909,87 @@ mount to the same device as being on the same filesystem. NB This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will not appear as an valid flag. +--skip-links + +This flag disables warning messages on skipped symlinks or junction +points, as you explicitly acknowledge that they should be skipped. + Changelog +- v1.38 - 2017-09-30 + - New backends + - Azure Blob Storage (thanks Andrei Dragomir) + - Box + - Onedrive for Business (thanks Oliver Heyme) + - QingStor from QingCloud (thanks wuyu) + - New commands + - rcat - read from standard input and stream upload + - tree - shows a nicely formatted recursive listing + - cryptdecode - decode crypted file names (thanks ishuah) + - config show - print the config file + - config file - print the config file location + - New Features + - Empty directories are deleted on sync + - dedupe - implement merging of duplicate directories + - check and cryptcheck made more consistent and use less memory + - cleanup for remaining remotes (thanks ishuah) + - --immutable for ensuring that files don't change (thanks + Jacob McNamee) + - --user-agent option (thanks Alex McGrath Kraak) + - --disable flag to disable optional features + - --bind flag for choosing the local addr on outgoing connections + - Support for zsh auto-completion (thanks bpicode) + - Stop normalizing file names but do a normalized compare in sync + - Compile + - Update to using go1.9 as the default go version + - Remove snapd build due to maintenance problems + - Bug Fixes + - Improve retriable error detection which makes multipart uploads + better + - Make check obey --ignore-size + - Fix bwlimit toggle in conjunction with schedules + (thanks cbruegg) + - config ensures newly written config is on the same mount + - Local + - Revert to copy when moving file across file system boundaries + - --skip-links to suppress symlink warnings (thanks Zhiming Wang) + - Mount + - Re-use rcat internals to support uploads from all remotes + - Dropbox + - Fix "entry doesn't belong in directory" error + - Stop using deprecated API methods + - Swift + - Fix server side copy to empty container with --fast-list + - Google Drive + - Change the default for --drive-use-trash to true + - S3 + - Set session token when using STS (thanks Girish Ramakrishnan) + - Glacier docs and error messages (thanks Jan Varho) + - Read 1000 (not 1024) items in dir listings to fix Wasabi + - Backblaze B2 + - Fix SHA1 mismatch when downloading files with no SHA1 + - Calculate missing hashes on the fly instead of spooling + - --b2-hard-delete to permanently delete (not hide) files (thanks + John Papandriopoulos) + - Hubic + - Fix creating containers - no longer have to use the default + container + - Swift + - Optionally configure from a standard set of OpenStack + environment vars + - Add endpoint_type config + - Google Cloud Storage + - Fix bucket creation to work with limited permission users + - SFTP + - Implement connection pooling for multiple ssh connections + - Limit new 
connections per second + - Add support for MD5 and SHA1 hashes where available (thanks + Christian Brüggemann) + - HTTP + - Fix URL encoding issues + - Fix directories with : in + - Fix panic with URL encoded content - v1.37 - 2017-07-22 - New backends - FTP - thanks to Antonio Messina @@ -5879,7 +7005,7 @@ Changelog - This allows remotes to list recursively if they can - This uses less transactions (important if you pay for them) - This may or may not be quicker - - This will user more memory as it has to hold the listing in + - This will use more memory as it has to hold the listing in memory - --old-sync-method deprecated - the remaining uses are covered by --fast-list @@ -6837,6 +7963,19 @@ is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions' file formats +tcp lookup some.domain.com no such host + +This happens when rclone cannot resolve a domain. Please check that your +DNS setup is generally working, e.g. + + # both should print a long list of possible IP addresses + dig www.googleapis.com # resolve using your default DNS + dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server + +If you are using systemd-resolved (default on Arch Linux), ensure it is +at version 233 or higher. Previous releases contain a bug which causes +not all domains to be resolved properly. + License @@ -6947,6 +8086,22 @@ Contributors - sainaen sainaen@gmail.com - gdm85 gdm85@users.noreply.github.com - Yaroslav Halchenko debian@onerussian.com +- John Papandriopoulos jpap@users.noreply.github.com +- Zhiming Wang zmwangx@gmail.com +- Andy Pilate cubox@cubox.me +- Oliver Heyme olihey@googlemail.com +- wuyu wuyu@yunify.com +- Andrei Dragomir adragomi@adobe.com +- Christian Brüggemann mail@cbruegg.com +- Alex McGrath Kraak amkdude@gmail.com +- bpicode bjoern.pirnay@googlemail.com +- Daniel Jagszent daniel@jagszent.de +- Josiah White thegenius2009@gmail.com +- Ishuah Kariuki kariuki@ishuah.com ishuah91@gmail.com +- Jan Varho jan@varho.org +- Girish Ramakrishnan girish@cloudron.io +- LingMan LingMan@users.noreply.github.com +- Jacob McNamee jacobmcnamee@gmail.com diff --git a/docs/content/changelog.md b/docs/content/changelog.md index da9b46c15..b0e6d9b48 100644 --- a/docs/content/changelog.md +++ b/docs/content/changelog.md @@ -1,12 +1,78 @@ --- title: "Documentation" description: "Rclone Changelog" -date: "2017-07-22" +date: "2017-09-30" --- Changelog --------- + * v1.38 - 2017-09-30 + * New backends + * Azure Blob Storage (thanks Andrei Dragomir) + * Box + * Onedrive for Business (thanks Oliver Heyme) + * QingStor from QingCloud (thanks wuyu) + * New commands + * `rcat` - read from standard input and stream upload + * `tree` - shows a nicely formatted recursive listing + * `cryptdecode` - decode crypted file names (thanks ishuah) + * `config show` - print the config file + * `config file` - print the config file location + * New Features + * Empty directories are deleted on `sync` + * `dedupe` - implement merging of duplicate directories + * `check` and `cryptcheck` made more consistent and use less memory + * `cleanup` for remaining remotes (thanks ishuah) + * `--immutable` for ensuring that files don't change (thanks Jacob McNamee) + * `--user-agent` option (thanks Alex McGrath Kraak) + * `--disable` flag to disable optional features + * `--bind` flag for choosing the local addr on outgoing connections + * Support for zsh auto-completion (thanks bpicode) + * Stop normalizing file names but do a normalized compare in `sync` 
+ * Compile + * Update to using go1.9 as the default go version + * Remove snapd build due to maintenance problems + * Bug Fixes + * Improve retriable error detection which makes multipart uploads better + * Make `check` obey `--ignore-size` + * Fix bwlimit toggle in conjunction with schedules (thanks cbruegg) + * `config` ensures newly written config is on the same mount + * Local + * Revert to copy when moving file across file system boundaries + * `--skip-links` to suppress symlink warnings (thanks Zhiming Wang) + * Mount + * Re-use `rcat` internals to support uploads from all remotes + * Dropbox + * Fix "entry doesn't belong in directory" error + * Stop using deprecated API methods + * Swift + * Fix server side copy to empty container with `--fast-list` + * Google Drive + * Change the default for `--drive-use-trash` to `true` + * S3 + * Set session token when using STS (thanks Girish Ramakrishnan) + * Glacier docs and error messages (thanks Jan Varho) + * Read 1000 (not 1024) items in dir listings to fix Wasabi + * Backblaze B2 + * Fix SHA1 mismatch when downloading files with no SHA1 + * Calculate missing hashes on the fly instead of spooling + * `--b2-hard-delete` to permanently delete (not hide) files (thanks John Papandriopoulos) + * Hubic + * Fix creating containers - no longer have to use the `default` container + * Swift + * Optionally configure from a standard set of OpenStack environment vars + * Add `endpoint_type` config + * Google Cloud Storage + * Fix bucket creation to work with limited permission users + * SFTP + * Implement connection pooling for multiple ssh connections + * Limit new connections per second + * Add support for MD5 and SHA1 hashes where available (thanks Christian Brüggemann) + * HTTP + * Fix URL encoding issues + * Fix directories with `:` in + * Fix panic with URL encoded content * v1.37 - 2017-07-22 * New backends * FTP - thanks to Antonio Messina diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md index e2df05c73..05a725c21 100644 --- a/docs/content/commands/rclone.md +++ b/docs/content/commands/rclone.md @@ -1,12 +1,12 @@ --- -date: 2017-08-20T10:49:45+02:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone" slug: rclone url: /commands/rclone/ --- ## rclone -Sync files and directories to and from local and remote object stores - v1.37 +Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 ### Synopsis @@ -15,19 +15,22 @@ Sync files and directories to and from local and remote object stores - v1.37 Rclone is a command line program to sync files and directories to and from various cloud storage systems and using file transfer services, such as: - * Google Drive - * Amazon S3 - * Openstack Swift / Rackspace cloud files / Memset Memstore - * Dropbox - * Google Cloud Storage * Amazon Drive - * Microsoft OneDrive - * Hubic + * Amazon S3 * Backblaze B2 - * Yandex Disk - * SFTP + * Box + * Dropbox * FTP + * Google Cloud Storage + * Google Drive * HTTP + * Hubic + * Microsoft Azure Blob Storage + * Microsoft OneDrive + * Openstack Swift / Rackspace cloud files / Memset Memstore + * QingStor + * SFTP + * Yandex Disk * The local filesystem Features @@ -56,11 +59,16 @@ rclone [flags] --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. 
(default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -74,6 +82,7 @@ rclone [flags] --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -82,7 +91,7 @@ rclone [flags] --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -97,10 +106,12 @@ rclone [flags] --filter-from stringArray Read filtering patterns from a file --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). + -h, --help help for rclone --ignore-checksum Skip post copy check of checksums. --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -127,9 +138,11 @@ rclone [flags] --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. 
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -140,6 +153,7 @@ rclone [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` @@ -153,6 +167,7 @@ rclone [flags] * [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied * [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied * [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of a crypted remote. +* [rclone cryptdecode](/commands/rclone_cryptdecode/) - Cryptdecode returns unencrypted file names. * [rclone dbhashsum](/commands/rclone_dbhashsum/) - Produces a Dropbox hash file for all the objects in the path. * [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files delete/rename them. * [rclone delete](/commands/rclone_delete/) - Remove the contents of path. @@ -171,11 +186,13 @@ rclone [flags] * [rclone ncdu](/commands/rclone_ncdu/) - Explore a remote with a text based user interface. * [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone.conf * [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents. +* [rclone rcat](/commands/rclone_rcat/) - Copies standard input to file on remote. * [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty. * [rclone rmdirs](/commands/rclone_rmdirs/) - Remove empty directories under the path. * [rclone sha1sum](/commands/rclone_sha1sum/) - Produces an sha1sum file for all the objects in the path. * [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path. * [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only. +* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree-like fashion. * [rclone version](/commands/rclone_version/) - Show the version number. -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md index 394b239c7..28a3aeafb 100644 --- a/docs/content/commands/rclone_authorize.md +++ b/docs/content/commands/rclone_authorize.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone authorize" slug: rclone_authorize url: /commands/rclone_authorize/ @@ -17,7 +17,13 @@ rclone from a machine with a browser - use as instructed by rclone config.
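A quick sketch of that flow (the remote type `"drive"` below is only an example):

```
# On the machine with a web browser, generate an OAuth token for the
# remote type being configured:
rclone authorize "drive"
# Paste the token it prints into the matching prompt of the
# `rclone config` session running on the headless machine.
```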
``` -rclone authorize +rclone authorize [flags] +``` + +### Options + +``` + -h, --help help for authorize ``` ### Options inherited from parent commands @@ -26,11 +32,16 @@ rclone authorize --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -44,6 +55,7 @@ rclone authorize --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -52,7 +64,7 @@ rclone authorize --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -71,6 +83,7 @@ rclone authorize --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -97,9 +110,11 @@ rclone authorize --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. 
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -110,10 +125,11 @@ rclone authorize --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md index e743491db..0aa3bb47f 100644 --- a/docs/content/commands/rclone_cat.md +++ b/docs/content/commands/rclone_cat.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone cat" slug: rclone_cat url: /commands/rclone_cat/ @@ -42,6 +42,7 @@ rclone cat remote:path [flags] --count int Only print N characters. (default -1) --discard Discard the output instead of printing. --head int Only print the first N characters. + -h, --help help for cat --offset int Start printing at offset N (or from end if -ve). --tail int Only print the last N characters. ``` @@ -52,11 +53,16 @@ rclone cat remote:path [flags] --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. 
(default 8) @@ -70,6 +76,7 @@ rclone cat remote:path [flags] --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -78,7 +85,7 @@ rclone cat remote:path [flags] --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -97,6 +104,7 @@ rclone cat remote:path [flags] --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -123,9 +131,11 @@ rclone cat remote:path [flags] --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -136,10 +146,11 @@ rclone cat remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md index 1a8414cad..4c7531ddd 100644 --- a/docs/content/commands/rclone_check.md +++ b/docs/content/commands/rclone_check.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone check" slug: rclone_check url: /commands/rclone_check/ @@ -33,6 +33,7 @@ rclone check source:path dest:path [flags] ``` --download Check by downloading rather than with hash. + -h, --help help for check ``` ### Options inherited from parent commands @@ -41,11 +42,16 @@ rclone check source:path dest:path [flags] --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -59,6 +65,7 @@ rclone check source:path dest:path [flags] --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -67,7 +74,7 @@ rclone check source:path dest:path [flags] --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -86,6 +93,7 @@ rclone check source:path dest:path [flags] --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -112,9 +120,11 @@ rclone check source:path dest:path [flags] --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -125,10 +135,11 @@ rclone check source:path dest:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md index af771ad07..a057e88bf 100644 --- a/docs/content/commands/rclone_cleanup.md +++ b/docs/content/commands/rclone_cleanup.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone cleanup" slug: rclone_cleanup url: /commands/rclone_cleanup/ @@ -17,7 +17,13 @@ versions. Not supported by all remotes. ``` -rclone cleanup remote:path +rclone cleanup remote:path [flags] +``` + +### Options + +``` + -h, --help help for cleanup ``` ### Options inherited from parent commands @@ -26,11 +32,16 @@ rclone cleanup remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. 
Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -44,6 +55,7 @@ rclone cleanup remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -52,7 +64,7 @@ rclone cleanup remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -71,6 +83,7 @@ rclone cleanup remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -97,9 +110,11 @@ rclone cleanup remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. 
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -110,10 +125,11 @@ rclone cleanup remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md index 399c28d99..1fa2f4199 100644 --- a/docs/content/commands/rclone_config.md +++ b/docs/content/commands/rclone_config.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone config" slug: rclone_config url: /commands/rclone_config/ @@ -11,10 +11,26 @@ Enter an interactive configuration session. ### Synopsis -Enter an interactive configuration session. +`rclone config` + enters an interactive configuration session where you can set up +new remotes and manage existing ones. You may also set or remove a password to +protect your configuration. + +Additional functions: + + * `rclone config edit` – same as above + * `rclone config file` – show path of configuration file in use + * `rclone config show` – print (decrypted) config file + ``` -rclone config +rclone config [function] [flags] +``` + +### Options + +``` + -h, --help help for config ``` ### Options inherited from parent commands ``` @@ -23,11 +39,16 @@ rclone config --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel.
(default 8) @@ -41,6 +62,7 @@ rclone config --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -49,7 +71,7 @@ rclone config --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -68,6 +90,7 @@ rclone config --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -94,9 +117,11 @@ rclone config --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -107,10 +132,11 @@ rclone config --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md index 453be7085..f9f2e7fdb 100644 --- a/docs/content/commands/rclone_copy.md +++ b/docs/content/commands/rclone_copy.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone copy" slug: rclone_copy url: /commands/rclone_copy/ @@ -53,7 +53,13 @@ the destination directory or not. ``` -rclone copy source:path dest:path +rclone copy source:path dest:path [flags] +``` + +### Options + +``` + -h, --help help for copy ``` ### Options inherited from parent commands @@ -62,11 +68,16 @@ rclone copy source:path dest:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -80,6 +91,7 @@ rclone copy source:path dest:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -88,7 +100,7 @@ rclone copy source:path dest:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -107,6 +119,7 @@ rclone copy source:path dest:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -133,9 +146,11 @@ rclone copy source:path dest:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -146,10 +161,11 @@ rclone copy source:path dest:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md index 2cb9bf1f2..d5444cb5d 100644 --- a/docs/content/commands/rclone_copyto.md +++ b/docs/content/commands/rclone_copyto.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone copyto" slug: rclone_copyto url: /commands/rclone_copyto/ @@ -40,7 +40,13 @@ destination. ``` -rclone copyto source:path dest:path +rclone copyto source:path dest:path [flags] +``` + +### Options + +``` + -h, --help help for copyto ``` ### Options inherited from parent commands @@ -49,11 +55,16 @@ rclone copyto source:path dest:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. 
(default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -67,6 +78,7 @@ rclone copyto source:path dest:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -75,7 +87,7 @@ rclone copyto source:path dest:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -94,6 +106,7 @@ rclone copyto source:path dest:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -120,9 +133,11 @@ rclone copyto source:path dest:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. 
(default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -133,10 +148,11 @@ rclone copyto source:path dest:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md index 7072be5b2..a15f742ac 100644 --- a/docs/content/commands/rclone_cryptcheck.md +++ b/docs/content/commands/rclone_cryptcheck.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone cryptcheck" slug: rclone_cryptcheck url: /commands/rclone_cryptcheck/ @@ -37,7 +37,13 @@ After it has run it will log the status of the encryptedremote:. ``` -rclone cryptcheck remote:path cryptedremote:path +rclone cryptcheck remote:path cryptedremote:path [flags] +``` + +### Options + +``` + -h, --help help for cryptcheck ``` ### Options inherited from parent commands @@ -46,11 +52,16 @@ rclone cryptcheck remote:path cryptedremote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -64,6 +75,7 @@ rclone cryptcheck remote:path cryptedremote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. 
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -72,7 +84,7 @@ rclone cryptcheck remote:path cryptedremote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -91,6 +103,7 @@ rclone cryptcheck remote:path cryptedremote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -117,9 +130,11 @@ rclone cryptcheck remote:path cryptedremote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -130,10 +145,11 @@ rclone cryptcheck remote:path cryptedremote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_cryptdecode.md b/docs/content/commands/rclone_cryptdecode.md new file mode 100644 index 000000000..dc462542e --- /dev/null +++ b/docs/content/commands/rclone_cryptdecode.md @@ -0,0 +1,139 @@ +--- +date: 2017-09-30T14:20:12+01:00 +title: "rclone cryptdecode" +slug: rclone_cryptdecode +url: /commands/rclone_cryptdecode/ +--- +## rclone cryptdecode + +Cryptdecode returns unencrypted file names. + +### Synopsis + + + +rclone cryptdecode returns unencrypted file names when provided with +a list of encrypted file names. List limit is 10 items. + +Use it like this: + + rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 + + +``` +rclone cryptdecode encryptedremote: encryptedfilename [flags] +``` + +### Options + +``` + -h, --help help for cryptdecode +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) + --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) + --buffer-size int Buffer size when copying files. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transfering + --delete-before When synchronizing, delete files on destination before transfering + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me + --drive-skip-gdocs Skip google documents in all listings. + --drive-trashed-only Only show files that are in the trash + --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-auth Dump HTTP headers with auth info + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. + --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). + --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). + --ignore-checksum Skip post copy check of checksums. + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames + --log-file string Log everything to this file + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --old-sync-method Deprecated - use --fast-list instead + -x, --one-file-system Don't cross filesystem boundaries. 
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) + --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") + -v, --verbose count[=-1] Print lots more stuff (repeat for more) +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 + +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_dbhashsum.md b/docs/content/commands/rclone_dbhashsum.md index a6dcf1654..ad345f760 100644 --- a/docs/content/commands/rclone_dbhashsum.md +++ b/docs/content/commands/rclone_dbhashsum.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone dbhashsum" slug: rclone_dbhashsum url: /commands/rclone_dbhashsum/ @@ -19,7 +19,13 @@ The output is in the same format as md5sum and sha1sum. ``` -rclone dbhashsum remote:path +rclone dbhashsum remote:path [flags] +``` + +### Options + +``` + -h, --help help for dbhashsum ``` ### Options inherited from parent commands @@ -28,11 +34,16 @@ rclone dbhashsum remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. 
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -46,6 +57,7 @@ rclone dbhashsum remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -54,7 +66,7 @@ rclone dbhashsum remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -73,6 +85,7 @@ rclone dbhashsum remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -99,9 +112,11 @@ rclone dbhashsum remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. 
(default 5G) --syslog Use Syslog for logging @@ -112,10 +127,11 @@ rclone dbhashsum remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md index f2e0c3893..e57828460 100644 --- a/docs/content/commands/rclone_dedupe.md +++ b/docs/content/commands/rclone_dedupe.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone dedupe" slug: rclone_dedupe url: /commands/rclone_dedupe/ @@ -12,10 +12,14 @@ Interactively find duplicate files delete/rename them. -By default `dedup` interactively finds duplicate files and offers to +By default `dedupe` interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names. +In the first pass it will merge directories with the same name. It +will do this iteratively until all the identical directories have been +merged. + The `dedupe` command will delete all but one of any identical (same md5sum) files it finds without confirmation. This means that for most duplicated files the `dedupe` command will not be interactive. You @@ -96,6 +100,7 @@ rclone dedupe [mode] remote:path [flags] ``` --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive") + -h, --help help for dedupe ``` ### Options inherited from parent commands @@ -104,11 +109,16 @@ rclone dedupe [mode] remote:path [flags] --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. 
(default 8) @@ -122,6 +132,7 @@ rclone dedupe [mode] remote:path [flags] --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -130,7 +141,7 @@ rclone dedupe [mode] remote:path [flags] --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -149,6 +160,7 @@ rclone dedupe [mode] remote:path [flags] --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -175,9 +187,11 @@ rclone dedupe [mode] remote:path [flags] --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -188,10 +202,11 @@ rclone dedupe [mode] remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md index 5e7159dde..294a44a3a 100644 --- a/docs/content/commands/rclone_delete.md +++ b/docs/content/commands/rclone_delete.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone delete" slug: rclone_delete url: /commands/rclone_delete/ @@ -31,7 +31,13 @@ delete all files bigger than 100MBytes. ``` -rclone delete remote:path +rclone delete remote:path [flags] +``` + +### Options + +``` + -h, --help help for delete ``` ### Options inherited from parent commands @@ -40,11 +46,16 @@ rclone delete remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -58,6 +69,7 @@ rclone delete remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -66,7 +78,7 @@ rclone delete remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -85,6 +97,7 @@ rclone delete remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -111,9 +124,11 @@ rclone delete remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -124,10 +139,11 @@ rclone delete remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md index f519a45dd..94679da68 100644 --- a/docs/content/commands/rclone_genautocomplete.md +++ b/docs/content/commands/rclone_genautocomplete.md @@ -1,5 +1,5 @@ --- -date: 2017-08-20T10:49:45+02:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone genautocomplete" slug: rclone_genautocomplete url: /commands/rclone_genautocomplete/ @@ -28,11 +28,16 @@ Run with --help to list the supported shells. --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. 
(default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -46,6 +51,7 @@ Run with --help to list the supported shells. --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -54,7 +60,7 @@ Run with --help to list the supported shells. --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -73,6 +79,7 @@ Run with --help to list the supported shells. --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -99,9 +106,11 @@ Run with --help to list the supported shells. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. 
(default 5G) --syslog Use Syslog for logging @@ -112,12 +121,13 @@ Run with --help to list the supported shells. --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37-099-gb78ecb15-zsh-completion +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 * [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone. * [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone. -###### Auto generated by spf13/cobra on 20-Aug-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_genautocomplete_bash.md b/docs/content/commands/rclone_genautocomplete_bash.md index 1c36655a0..7bb7c9eac 100644 --- a/docs/content/commands/rclone_genautocomplete_bash.md +++ b/docs/content/commands/rclone_genautocomplete_bash.md @@ -1,5 +1,5 @@ --- -date: 2017-08-20T10:49:45+02:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone genautocomplete bash" slug: rclone_genautocomplete_bash url: /commands/rclone_genautocomplete_bash/ @@ -44,11 +44,16 @@ rclone genautocomplete bash [output_file] [flags] --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -62,6 +67,7 @@ rclone genautocomplete bash [output_file] [flags] --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. 
(default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -70,7 +76,7 @@ rclone genautocomplete bash [output_file] [flags] --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -89,6 +95,7 @@ rclone genautocomplete bash [output_file] [flags] --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -115,9 +122,11 @@ rclone genautocomplete bash [output_file] [flags] --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -128,10 +137,11 @@ rclone genautocomplete bash [output_file] [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. 
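
For reference, a typical way to install and load the completion script this command generates (a sketch; it assumes a distribution that sources scripts from `/etc/bash_completion.d/`, which is where the command writes by default):

```
# Write the completion script to /etc/bash_completion.d/rclone
# (the default output location, so root is usually needed).
sudo rclone genautocomplete bash

# Load it into the current shell; alternatively, log out and back in.
. /etc/bash_completion
```

To avoid writing to the system location, pass an explicit `output_file` argument (e.g. `rclone genautocomplete bash ~/rclone.bash`) and source that file instead.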
-###### Auto generated by spf13/cobra on 20-Aug-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_genautocomplete_zsh.md b/docs/content/commands/rclone_genautocomplete_zsh.md index 0b1c6bf2b..ca5871f3e 100644 --- a/docs/content/commands/rclone_genautocomplete_zsh.md +++ b/docs/content/commands/rclone_genautocomplete_zsh.md @@ -1,5 +1,5 @@ --- -date: 2017-08-20T10:49:45+02:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone genautocomplete zsh" slug: rclone_genautocomplete_zsh url: /commands/rclone_genautocomplete_zsh/ @@ -44,11 +44,16 @@ rclone genautocomplete zsh [output_file] [flags] --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -62,6 +67,7 @@ rclone genautocomplete zsh [output_file] [flags] --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -70,7 +76,7 @@ rclone genautocomplete zsh [output_file] [flags] --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -89,6 +95,7 @@ rclone genautocomplete zsh [output_file] [flags] --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. 
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -115,9 +122,11 @@ rclone genautocomplete zsh [output_file] [flags] --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -128,10 +137,11 @@ rclone genautocomplete zsh [output_file] [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. -###### Auto generated by spf13/cobra on 20-Aug-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md index e6b45d528..ff532937f 100644 --- a/docs/content/commands/rclone_gendocs.md +++ b/docs/content/commands/rclone_gendocs.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone gendocs" slug: rclone_gendocs url: /commands/rclone_gendocs/ @@ -32,11 +32,16 @@ rclone gendocs output_directory [flags] --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. 
(default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -50,6 +55,7 @@ rclone gendocs output_directory [flags] --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -58,7 +64,7 @@ rclone gendocs output_directory [flags] --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -77,6 +83,7 @@ rclone gendocs output_directory [flags] --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -103,9 +110,11 @@ rclone gendocs output_directory [flags] --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -116,10 +125,11 @@ rclone gendocs output_directory [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md index 15b1a2d0e..e0e014557 100644 --- a/docs/content/commands/rclone_listremotes.md +++ b/docs/content/commands/rclone_listremotes.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone listremotes" slug: rclone_listremotes url: /commands/rclone_listremotes/ @@ -24,6 +24,7 @@ rclone listremotes [flags] ### Options ``` + -h, --help help for listremotes -l, --long Show the type as well as names. ``` @@ -33,11 +34,16 @@ rclone listremotes [flags] --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -51,6 +57,7 @@ rclone listremotes [flags] --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -59,7 +66,7 @@ rclone listremotes [flags] --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -78,6 +85,7 @@ rclone listremotes [flags] --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -104,9 +112,11 @@ rclone listremotes [flags] --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -117,10 +127,11 @@ rclone listremotes [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md index 7bcaa89c1..363dc48b3 100644 --- a/docs/content/commands/rclone_ls.md +++ b/docs/content/commands/rclone_ls.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone ls" slug: rclone_ls url: /commands/rclone_ls/ @@ -14,7 +14,13 @@ List all the objects in the path with size and path. List all the objects in the path with size and path. ``` -rclone ls remote:path +rclone ls remote:path [flags] +``` + +### Options + +``` + -h, --help help for ls ``` ### Options inherited from parent commands @@ -23,11 +29,16 @@ rclone ls remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. 
(default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -41,6 +52,7 @@ rclone ls remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -49,7 +61,7 @@ rclone ls remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -68,6 +80,7 @@ rclone ls remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -94,9 +107,11 @@ rclone ls remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. 
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -107,10 +122,11 @@ rclone ls remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md index 4957775ee..771550b20 100644 --- a/docs/content/commands/rclone_lsd.md +++ b/docs/content/commands/rclone_lsd.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone lsd" slug: rclone_lsd url: /commands/rclone_lsd/ @@ -14,7 +14,13 @@ List all directories/containers/buckets in the path. List all directories/containers/buckets in the path. ``` -rclone lsd remote:path +rclone lsd remote:path [flags] +``` + +### Options + +``` + -h, --help help for lsd ``` ### Options inherited from parent commands @@ -23,11 +29,16 @@ rclone lsd remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -41,6 +52,7 @@ rclone lsd remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. 
(default "docx,xlsx,pptx,svg") @@ -49,7 +61,7 @@ rclone lsd remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -68,6 +80,7 @@ rclone lsd remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -94,9 +107,11 @@ rclone lsd remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -107,10 +122,11 @@ rclone lsd remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md index ec5e73613..438789e58 100644 --- a/docs/content/commands/rclone_lsjson.md +++ b/docs/content/commands/rclone_lsjson.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone lsjson" slug: rclone_lsjson url: /commands/rclone_lsjson/ @@ -46,6 +46,7 @@ rclone lsjson remote:path [flags] ``` --hash Include hashes in the output (may take longer). + -h, --help help for lsjson --no-modtime Don't read the modification time (can speed things up). 
-R, --recursive Recurse into the listing. ``` @@ -56,11 +57,16 @@ rclone lsjson remote:path [flags] --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -74,6 +80,7 @@ rclone lsjson remote:path [flags] --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -82,7 +89,7 @@ rclone lsjson remote:path [flags] --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -101,6 +108,7 @@ rclone lsjson remote:path [flags] --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -127,9 +135,11 @@ rclone lsjson remote:path [flags] --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. 
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -140,10 +150,11 @@ rclone lsjson remote:path [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md index 28f258596..130c4a456 100644 --- a/docs/content/commands/rclone_lsl.md +++ b/docs/content/commands/rclone_lsl.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone lsl" slug: rclone_lsl url: /commands/rclone_lsl/ @@ -14,7 +14,13 @@ List all the objects path with modification time, size and path. List all the objects path with modification time, size and path. ``` -rclone lsl remote:path +rclone lsl remote:path [flags] +``` + +### Options + +``` + -h, --help help for lsl ``` ### Options inherited from parent commands @@ -23,11 +29,16 @@ rclone lsl remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. 
(default 8) @@ -41,6 +52,7 @@ rclone lsl remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -49,7 +61,7 @@ rclone lsl remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -68,6 +80,7 @@ rclone lsl remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -94,9 +107,11 @@ rclone lsl remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -107,10 +122,11 @@ rclone lsl remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md index ef4e8329a..c040f10a5 100644 --- a/docs/content/commands/rclone_md5sum.md +++ b/docs/content/commands/rclone_md5sum.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone md5sum" slug: rclone_md5sum url: /commands/rclone_md5sum/ @@ -17,7 +17,13 @@ is in the same format as the standard md5sum tool produces. ``` -rclone md5sum remote:path +rclone md5sum remote:path [flags] +``` + +### Options + +``` + -h, --help help for md5sum ``` ### Options inherited from parent commands @@ -26,11 +32,16 @@ rclone md5sum remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -44,6 +55,7 @@ rclone md5sum remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -52,7 +64,7 @@ rclone md5sum remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. 
Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -71,6 +83,7 @@ rclone md5sum remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -97,9 +110,11 @@ rclone md5sum remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -110,10 +125,11 @@ rclone md5sum remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md index 837a3b8e9..ab8028bba 100644 --- a/docs/content/commands/rclone_mkdir.md +++ b/docs/content/commands/rclone_mkdir.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone mkdir" slug: rclone_mkdir url: /commands/rclone_mkdir/ @@ -14,7 +14,13 @@ Make the path if it doesn't already exist. Make the path if it doesn't already exist. ``` -rclone mkdir remote:path +rclone mkdir remote:path [flags] +``` + +### Options + +``` + -h, --help help for mkdir ``` ### Options inherited from parent commands @@ -23,11 +29,16 @@ rclone mkdir remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. 
(default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -41,6 +52,7 @@ rclone mkdir remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -49,7 +61,7 @@ rclone mkdir remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -68,6 +80,7 @@ rclone mkdir remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -94,9 +107,11 @@ rclone mkdir remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. 
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
@@ -107,10 +122,11 @@ rclone mkdir remote:path
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84")
-v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84
-###### Auto generated by spf13/cobra on 22-Jul-2017
+###### Auto generated by spf13/cobra on 30-Sep-2017
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index 2b8c96abb..3144dc5a1 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -1,5 +1,5 @@
---
-date: 2017-07-22T18:15:25+01:00
+date: 2017-09-30T14:20:12+01:00
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
@@ -39,6 +39,34 @@ When that happens, it is the user's responsibility to stop the mount manually wi
    # OS X
    umount /path/to/local/mount

+### Installing on Windows ###
+
+To run rclone mount on Windows, you will need to
+download and install [WinFsp](http://www.secfs.net/winfsp/).
+
+WinFsp is an [open source](https://github.com/billziss-gh/winfsp)
+Windows File System Proxy which makes it easy to write user space file
+systems for Windows. It provides a FUSE emulation layer which rclone
+uses in combination with
+[cgofuse](https://github.com/billziss-gh/cgofuse). Both of these
+packages are by Bill Zissimopoulos, who was very helpful during the
+implementation of rclone mount for Windows.
+
+#### Windows caveats ####
+
+Note that drives created as Administrator are not visible to other
+accounts (including the account that was elevated to
+Administrator). So if you start a Windows drive from an Administrative
+Command Prompt and then try to access the same drive from Explorer
+(which does not run as Administrator), you will not be able to see the
+new drive.
+
+The easiest way around this is to start the drive from a normal
+command prompt. It is also possible to start a drive from the SYSTEM
+account (using [the WinFsp.Launcher
+infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture))
+which creates drives accessible for everyone on the system.
+
### Limitations ###

This can only write files seqentially, it can only seek when reading.
@@ -84,13 +112,6 @@ like this:

    kill -SIGHUP $(pidof rclone)

-### Bugs ###
-
- * All the remotes should work for read, but some may not for write
- * those which need to know the size in advance won't - eg B2
- * maybe should pass in size as -1 to mean work it out
- * Or put in an an upload cache to cache the files on disk first
-
```
rclone mount remote:path /path/to/mountpoint [flags]
```
@@ -107,6 +128,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem.
(default 502) + -h, --help help for mount --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k) --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). @@ -125,11 +147,16 @@ rclone mount remote:path /path/to/mountpoint [flags] --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -143,6 +170,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -151,7 +179,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -170,6 +198,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. 
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -196,9 +225,11 @@ rclone mount remote:path /path/to/mountpoint [flags] --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -209,10 +240,11 @@ rclone mount remote:path /path/to/mountpoint [flags] --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md index 5c827c8d3..1524d8e01 100644 --- a/docs/content/commands/rclone_move.md +++ b/docs/content/commands/rclone_move.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone move" slug: rclone_move url: /commands/rclone_move/ @@ -31,7 +31,13 @@ into `dest:path` then delete the original (if no errors on copy) in ``` -rclone move source:path dest:path +rclone move source:path dest:path [flags] +``` + +### Options + +``` + -h, --help help for move ``` ### Options inherited from parent commands @@ -40,11 +46,16 @@ rclone move source:path dest:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. 
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -58,6 +69,7 @@ rclone move source:path dest:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -66,7 +78,7 @@ rclone move source:path dest:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -85,6 +97,7 @@ rclone move source:path dest:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -111,9 +124,11 @@ rclone move source:path dest:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. 
(default 5G) --syslog Use Syslog for logging @@ -124,10 +139,11 @@ rclone move source:path dest:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md index fc1054f19..834d6d683 100644 --- a/docs/content/commands/rclone_moveto.md +++ b/docs/content/commands/rclone_moveto.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone moveto" slug: rclone_moveto url: /commands/rclone_moveto/ @@ -43,7 +43,13 @@ transfer. ``` -rclone moveto source:path dest:path +rclone moveto source:path dest:path [flags] +``` + +### Options + +``` + -h, --help help for moveto ``` ### Options inherited from parent commands @@ -52,11 +58,16 @@ rclone moveto source:path dest:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -70,6 +81,7 @@ rclone moveto source:path dest:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. 
(default "docx,xlsx,pptx,svg") @@ -78,7 +90,7 @@ rclone moveto source:path dest:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -97,6 +109,7 @@ rclone moveto source:path dest:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -123,9 +136,11 @@ rclone moveto source:path dest:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -136,10 +151,11 @@ rclone moveto source:path dest:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md index 080bb3120..acb91d4a9 100644 --- a/docs/content/commands/rclone_ncdu.md +++ b/docs/content/commands/rclone_ncdu.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone ncdu" slug: rclone_ncdu url: /commands/rclone_ncdu/ @@ -38,7 +38,13 @@ importantly deleting files, but is useful as it stands. 
``` -rclone ncdu remote:path +rclone ncdu remote:path [flags] +``` + +### Options + +``` + -h, --help help for ncdu ``` ### Options inherited from parent commands @@ -47,11 +53,16 @@ rclone ncdu remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -65,6 +76,7 @@ rclone ncdu remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -73,7 +85,7 @@ rclone ncdu remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -92,6 +104,7 @@ rclone ncdu remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. 
--include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -118,9 +131,11 @@ rclone ncdu remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -131,10 +146,11 @@ rclone ncdu remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md index 71cd631ec..2930e6498 100644 --- a/docs/content/commands/rclone_obscure.md +++ b/docs/content/commands/rclone_obscure.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone obscure" slug: rclone_obscure url: /commands/rclone_obscure/ @@ -14,7 +14,13 @@ Obscure password for use in the rclone.conf Obscure password for use in the rclone.conf ``` -rclone obscure password +rclone obscure password [flags] +``` + +### Options + +``` + -h, --help help for obscure ``` ### Options inherited from parent commands @@ -23,11 +29,16 @@ rclone obscure password --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. 
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -41,6 +52,7 @@ rclone obscure password --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -49,7 +61,7 @@ rclone obscure password --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -68,6 +80,7 @@ rclone obscure password --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -94,9 +107,11 @@ rclone obscure password --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -107,10 +122,11 @@ rclone obscure password --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. 
(default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md index 88a7e9ef8..90882aa9a 100644 --- a/docs/content/commands/rclone_purge.md +++ b/docs/content/commands/rclone_purge.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone purge" slug: rclone_purge url: /commands/rclone_purge/ @@ -18,7 +18,13 @@ you want to selectively delete files. ``` -rclone purge remote:path +rclone purge remote:path [flags] +``` + +### Options + +``` + -h, --help help for purge ``` ### Options inherited from parent commands @@ -27,11 +33,16 @@ rclone purge remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -45,6 +56,7 @@ rclone purge remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -53,7 +65,7 @@ rclone purge remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. 
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -72,6 +84,7 @@ rclone purge remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -98,9 +111,11 @@ rclone purge remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -111,10 +126,11 @@ rclone purge remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_rcat.md b/docs/content/commands/rclone_rcat.md new file mode 100644 index 000000000..c549cf2cc --- /dev/null +++ b/docs/content/commands/rclone_rcat.md @@ -0,0 +1,154 @@ +--- +date: 2017-09-30T14:20:12+01:00 +title: "rclone rcat" +slug: rclone_rcat +url: /commands/rclone_rcat/ +--- +## rclone rcat + +Copies standard input to file on remote. + +### Synopsis + + + +rclone rcat reads from standard input (stdin) and copies it to a +single remote file. + + echo "hello world" | rclone rcat remote:path/to/file + ffmpeg - | rclone rcat --checksum remote:path/to/file + +If the remote file already exists, it will be overwritten. + +rcat will try to upload small files in a single request, which is +usually more efficient than the streaming/chunked upload endpoints, +which use multiple requests. Exact behaviour depends on the remote. 
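+For example, something short piped in like this will normally go up
+in a single request, while a long-running source such as the ffmpeg
+example above is uploaded in chunks (the remote path here is only
+illustrative):
+
+    uname -a | rclone rcat remote:info/uname.txt
+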
+What is considered a small file may be set through
+`--streaming-upload-cutoff`. Uploading only starts after
+the cutoff is reached or if the file ends before that. The data
+must fit into RAM. The cutoff needs to be small enough to adhere
+to the limits of your remote; see its documentation for details.
+Generally speaking, setting this cutoff too high will decrease performance.
+
+Note also that the upload cannot be retried, because the data is
+not kept around until the upload succeeds. If you need to transfer
+a lot of data, you're better off caching it locally and then
+using `rclone move` to send it to the destination.
+
+```
+rclone rcat remote:path [flags]
+```
+
+### Options
+
+```
+ -h, --help help for rcat
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash + --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-auth Dump HTTP headers with auth info + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. + --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). + --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). + --ignore-checksum Skip post copy check of checksums. + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames + --log-file string Log everything to this file + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --old-sync-method Deprecated - use --fast-list instead + -x, --one-file-system Don't cross filesystem boundaries. + --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) + --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") + -v, --verbose count[=-1] Print lots more stuff (repeat for more) +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 + +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md index c1318c5d8..f0180cf54 100644 --- a/docs/content/commands/rclone_rmdir.md +++ b/docs/content/commands/rclone_rmdir.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone rmdir" slug: rclone_rmdir url: /commands/rclone_rmdir/ @@ -16,7 +16,13 @@ Remove the path. Note that you can't remove a path with objects in it, use purge for that. ``` -rclone rmdir remote:path +rclone rmdir remote:path [flags] +``` + +### Options + +``` + -h, --help help for rmdir ``` ### Options inherited from parent commands @@ -25,11 +31,16 @@ rclone rmdir remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. 
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -43,6 +54,7 @@ rclone rmdir remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -51,7 +63,7 @@ rclone rmdir remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -70,6 +82,7 @@ rclone rmdir remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -96,9 +109,11 @@ rclone rmdir remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -109,10 +124,11 @@ rclone rmdir remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. 
(default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md index 130eb59d5..1afaf7092 100644 --- a/docs/content/commands/rclone_rmdirs.md +++ b/docs/content/commands/rclone_rmdirs.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone rmdirs" slug: rclone_rmdirs url: /commands/rclone_rmdirs/ @@ -21,7 +21,13 @@ empty directories in. ``` -rclone rmdirs remote:path +rclone rmdirs remote:path [flags] +``` + +### Options + +``` + -h, --help help for rmdirs ``` ### Options inherited from parent commands @@ -30,11 +36,16 @@ rclone rmdirs remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -48,6 +59,7 @@ rclone rmdirs remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -56,7 +68,7 @@ rclone rmdirs remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. 
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -75,6 +87,7 @@ rclone rmdirs remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -101,9 +114,11 @@ rclone rmdirs remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -114,10 +129,11 @@ rclone rmdirs remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md index 775781424..6806b0a7c 100644 --- a/docs/content/commands/rclone_sha1sum.md +++ b/docs/content/commands/rclone_sha1sum.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone sha1sum" slug: rclone_sha1sum url: /commands/rclone_sha1sum/ @@ -17,7 +17,13 @@ is in the same format as the standard sha1sum tool produces. ``` -rclone sha1sum remote:path +rclone sha1sum remote:path [flags] +``` + +### Options + +``` + -h, --help help for sha1sum ``` ### Options inherited from parent commands @@ -26,11 +32,16 @@ rclone sha1sum remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. 
(default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -44,6 +55,7 @@ rclone sha1sum remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -52,7 +64,7 @@ rclone sha1sum remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -71,6 +83,7 @@ rclone sha1sum remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -97,9 +110,11 @@ rclone sha1sum remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. 
(0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -110,10 +125,11 @@ rclone sha1sum remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md index a067a5623..5996722e9 100644 --- a/docs/content/commands/rclone_size.md +++ b/docs/content/commands/rclone_size.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone size" slug: rclone_size url: /commands/rclone_size/ @@ -14,7 +14,13 @@ Prints the total size and number of objects in remote:path. Prints the total size and number of objects in remote:path. ``` -rclone size remote:path +rclone size remote:path [flags] +``` + +### Options + +``` + -h, --help help for size ``` ### Options inherited from parent commands @@ -23,11 +29,16 @@ rclone size remote:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. 
(default 8) @@ -41,6 +52,7 @@ rclone size remote:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -49,7 +61,7 @@ rclone size remote:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -68,6 +80,7 @@ rclone size remote:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -94,9 +107,11 @@ rclone size remote:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -107,10 +122,11 @@ rclone size remote:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md index 1ac675e12..dcf59b5fe 100644 --- a/docs/content/commands/rclone_sync.md +++ b/docs/content/commands/rclone_sync.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone sync" slug: rclone_sync url: /commands/rclone_sync/ @@ -33,7 +33,13 @@ go there. ``` -rclone sync source:path dest:path +rclone sync source:path dest:path [flags] +``` + +### Options + +``` + -h, --help help for sync ``` ### Options inherited from parent commands @@ -42,11 +48,16 @@ rclone sync source:path dest:path --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -60,6 +71,7 @@ rclone sync source:path dest:path --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -68,7 +80,7 @@ rclone sync source:path dest:path --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. 
(default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -87,6 +99,7 @@ rclone sync source:path dest:path --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -113,9 +126,11 @@ rclone sync source:path dest:path --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -126,10 +141,11 @@ rclone sync source:path dest:path --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_tree.md b/docs/content/commands/rclone_tree.md new file mode 100644 index 000000000..b978036bd --- /dev/null +++ b/docs/content/commands/rclone_tree.md @@ -0,0 +1,175 @@ +--- +date: 2017-09-30T14:20:12+01:00 +title: "rclone tree" +slug: rclone_tree +url: /commands/rclone_tree/ +--- +## rclone tree + +List the contents of the remote in a tree like fashion. + +### Synopsis + + + +rclone tree lists the contents of a remote in a similar way to the +unix tree command. + +For example + + $ rclone tree remote:path + / + ├── file1 + ├── file2 + ├── file3 + └── subdir + ├── file4 + └── file5 + + 1 directories, 5 files + +You can use any of the filtering options with the tree command (eg +--include and --exclude). You can also use --fast-list. + +The tree command has many options for controlling the listing which +are compatible with the tree command. Note that not all of them have +short options as they conflict with rclone's short options. 
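+
+For example, to list only directories and descend at most two levels
+(a sketch; both flags are described under Options below):
+
+    rclone tree --dirs-only --level 2 remote:path
+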
+ + +``` +rclone tree remote:path [flags] +``` + +### Options + +``` + -a, --all All files are listed (list . files too). + -C, --color Turn colorization on always. + -d, --dirs-only List directories only. + --dirsfirst List directories before files (-U disables). + --full-path Print the full path prefix for each file. + -h, --help help for tree + --human Print the size in a more human readable way. + --level int Descend only level directories deep. + -D, --modtime Print the date of last modification. + -i, --noindent Don't print indentation lines. + --noreport Turn off file/directory count at end of tree listing. + -o, --output string Output to file instead of stdout. + -p, --protections Print the protections for each file. + -Q, --quote Quote filenames with double quotes. + -s, --size Print the size in bytes of each file. + --sort string Select sort: name,version,size,mtime,ctime. + --sort-ctime Sort files by last status change time. + -t, --sort-modtime Sort files by last modification time. + -r, --sort-reverse Reverse the order of the sort. + -U, --unsorted Leave files unsorted. + --version Sort files alphanumerically by version. +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) + --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) + --buffer-size int Buffer size when copying files. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + -L, --copy-links Follow symlinks and copy the pointed to item. + --cpuprofile string Write cpu profile to file + --crypt-show-mapping For all files listed show how the names encrypt. + --delete-after When synchronizing, delete files on destination after transfering + --delete-before When synchronizing, delete files on destination before transfering + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. 
(default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-shared-with-me Only show files that are shared with me + --drive-skip-gdocs Skip google documents in all listings. + --drive-trashed-only Only show files that are in the trash + --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-auth Dump HTTP headers with auth info + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --fast-list Use recursive list if available. Uses more memory but fewer transactions. + --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). + --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). + --ignore-checksum Skip post copy check of checksums. + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames + --log-file string Log everything to this file + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --old-sync-method Deprecated - use --fast-list instead + -x, --one-file-system Don't cross filesystem boundaries. + --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. 
(default 10M) + --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 + --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) + --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix for use with --backup-dir. + --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") + -v, --verbose count[=-1] Print lots more stuff (repeat for more) +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 + +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md index 38ae5a4ed..615ae8482 100644 --- a/docs/content/commands/rclone_version.md +++ b/docs/content/commands/rclone_version.md @@ -1,5 +1,5 @@ --- -date: 2017-07-22T18:15:25+01:00 +date: 2017-09-30T14:20:12+01:00 title: "rclone version" slug: rclone_version url: /commands/rclone_version/ @@ -14,7 +14,13 @@ Show the version number. Show the version number. ``` -rclone version +rclone version [flags] +``` + +### Options + +``` + -h, --help help for version ``` ### Options inherited from parent commands @@ -23,11 +29,16 @@ rclone version --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --ask-password Allow prompt for password for encrypted configuration. (default true) + --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) + --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-test-mode string A flag string for X-Bz-Test-Mode header. 
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --b2-versions Include old versions in directory listings. --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --buffer-size int Buffer size when copying files. (default 16M) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --checkers int Number of checkers to run in parallel. (default 8) @@ -41,6 +52,7 @@ rclone version --delete-before When synchronizing, delete files on destination before transfering --delete-during When synchronizing, delete files during transfer (default) --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list. --drive-auth-owner-only Only consider files owned by the authenticated user. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") @@ -49,7 +61,7 @@ rclone version --drive-skip-gdocs Skip google documents in all listings. --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) - --drive-use-trash Send files to the trash instead of deleting permanently. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M) -n, --dry-run Do a trial run with no permanent changes --dump-auth Dump HTTP headers with auth info @@ -68,6 +80,7 @@ rclone version --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use mod-time or checksum. -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. --include stringArray Include files matching pattern --include-from stringArray Read include patterns from file --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames @@ -94,9 +107,11 @@ rclone version --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA) --size-only Skip based on size only, not mod-time or checksum + --skip-links Don't warn about skipped symlinks. --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --suffix string Suffix for use with --backup-dir. --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --syslog Use Syslog for logging @@ -107,10 +122,11 @@ rclone version --track-renames When synchronizing, track file renames and do a server side move if possible --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. 
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-001-gda4e1b84") -v, --verbose count[=-1] Print lots more stuff (repeat for more) ``` ### SEE ALSO -* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37 +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38-001-gda4e1b84 -###### Auto generated by spf13/cobra on 22-Jul-2017 +###### Auto generated by spf13/cobra on 30-Sep-2017 diff --git a/docs/layouts/shortcodes/version.html b/docs/layouts/shortcodes/version.html index 5281a274b..f3e194118 100644 --- a/docs/layouts/shortcodes/version.html +++ b/docs/layouts/shortcodes/version.html @@ -1 +1 @@ -v1.37 +v1.38 \ No newline at end of file diff --git a/fs/version.go b/fs/version.go index d8bdba2fc..b1744b0f1 100644 --- a/fs/version.go +++ b/fs/version.go @@ -1,4 +1,4 @@ package fs // Version of rclone -var Version = "v1.37-DEV" +var Version = "v1.38" diff --git a/rclone.1 b/rclone.1 index f126ba139..d04013702 100644 --- a/rclone.1 +++ b/rclone.1 @@ -1,41 +1,63 @@ .\"t .\" Automatically generated by Pandoc 1.17.2 .\" -.TH "rclone" "1" "Jul 22, 2017" "User Manual" "" +.TH "rclone" "1" "Sep 30, 2017" "User Manual" "" .hy .SH Rclone .PP [IMAGE: Logo (https://rclone.org/img/rclone-120x120.png)] (https://rclone.org/) .PP Rclone is a command line program to sync files and directories to and -from -.IP \[bu] 2 -Google Drive -.IP \[bu] 2 -Amazon S3 -.IP \[bu] 2 -Openstack Swift / Rackspace cloud files / Memset Memstore -.IP \[bu] 2 -Dropbox -.IP \[bu] 2 -Google Cloud Storage +from: .IP \[bu] 2 Amazon Drive .IP \[bu] 2 -Microsoft OneDrive -.IP \[bu] 2 -Hubic +Amazon S3 .IP \[bu] 2 Backblaze B2 .IP \[bu] 2 -Yandex Disk +Box .IP \[bu] 2 -SFTP +Ceph +.IP \[bu] 2 +Dreamhost +.IP \[bu] 2 +Dropbox .IP \[bu] 2 FTP .IP \[bu] 2 +Google Cloud Storage +.IP \[bu] 2 +Google Drive +.IP \[bu] 2 HTTP .IP \[bu] 2 +Hubic +.IP \[bu] 2 +Memset Memstore +.IP \[bu] 2 +Microsoft Azure Blob Storage +.IP \[bu] 2 +Microsoft OneDrive +.IP \[bu] 2 +Minio +.IP \[bu] 2 +OVH +.IP \[bu] 2 +Openstack Swift +.IP \[bu] 2 +Oracle Cloud Storage +.IP \[bu] 2 +QingStor +.IP \[bu] 2 +Rackspace Cloud Files +.IP \[bu] 2 +SFTP +.IP \[bu] 2 +Wasabi +.IP \[bu] 2 +Yandex Disk +.IP \[bu] 2 The local filesystem .PP Features @@ -153,10 +175,14 @@ You will be prompted for your password. .IP .nf \f[C] +sudo\ mkdir\ \-p\ /usr/local/bin sudo\ mv\ rclone\ /usr/local/bin/ \f[] .fi .PP +(the \f[C]mkdir\f[] command is safe to run, even if the directory +already exists). +.PP Remove the leftover files. .IP .nf @@ -212,85 +238,6 @@ add the role to the hosts you want rclone installed to: \ \ \ \ \ \ \ \ \ \ \-\ rclone \f[] .fi -.SS Installation with snap -.SS Quickstart -.IP \[bu] 2 -install Snapd on your distro using the instructions below -.IP \[bu] 2 -sudo snap install rclone \-\-classic -.IP \[bu] 2 -Run \f[C]rclone\ config\f[] to setup. -See rclone config docs (https://rclone.org/docs/) for more details. 
-.PP -See below for how to install snapd if it isn\[aq]t already installed -.SS Arch -.IP -.nf -\f[C] -sudo\ pacman\ \-S\ snapd -\f[] -.fi -.PP -enable the snapd systemd service: -.IP -.nf -\f[C] -sudo\ systemctl\ enable\ \-\-now\ snapd.socket -\f[] -.fi -.SS Debian / Ubuntu -.IP -.nf -\f[C] -sudo\ apt\ install\ snapd -\f[] -.fi -.SS Fedora -.IP -.nf -\f[C] -sudo\ dnf\ copr\ enable\ zyga/snapcore -sudo\ dnf\ install\ snapd -\f[] -.fi -.PP -enable the snapd systemd service: -.IP -.nf -\f[C] -sudo\ systemctl\ enable\ \-\-now\ snapd.service -\f[] -.fi -.PP -SELinux support is in beta, so currently: -.IP -.nf -\f[C] -sudo\ setenforce\ 0 -\f[] -.fi -.PP -to persist, edit \f[C]/etc/selinux/config\f[] to set -\f[C]SELINUX=permissive\f[] and reboot. -.SS Gentoo -.PP -Install the gentoo\-snappy -overlay (https://github.com/zyga/gentoo-snappy). -.SS OpenEmbedded/Yocto -.PP -Install the snap meta -layer (https://github.com/morphis/meta-snappy/blob/master/README.md). -.SS openSUSE -.IP -.nf -\f[C] -sudo\ zypper\ addrepo\ https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/\ snappy -sudo\ zypper\ install\ snapd -\f[] -.fi -.SS OpenWrt -.PP -Enable the snap\-openwrt feed. .SS Configure .PP First, you\[aq]ll need to configure rclone. @@ -310,36 +257,42 @@ rclone\ config .PP See the following for detailed instructions for .IP \[bu] 2 -Google Drive (https://rclone.org/drive/) +Amazon Drive (https://rclone.org/amazonclouddrive/) .IP \[bu] 2 Amazon S3 (https://rclone.org/s3/) .IP \[bu] 2 -Swift / Rackspace Cloudfiles / Memset -Memstore (https://rclone.org/swift/) +Backblaze B2 (https://rclone.org/b2/) +.IP \[bu] 2 +Box (https://rclone.org/box/) +.IP \[bu] 2 +Crypt (https://rclone.org/crypt/) \- to encrypt other remotes .IP \[bu] 2 Dropbox (https://rclone.org/dropbox/) .IP \[bu] 2 +FTP (https://rclone.org/ftp/) +.IP \[bu] 2 Google Cloud Storage (https://rclone.org/googlecloudstorage/) .IP \[bu] 2 -Local filesystem (https://rclone.org/local/) -.IP \[bu] 2 -Amazon Drive (https://rclone.org/amazonclouddrive/) -.IP \[bu] 2 -Backblaze B2 (https://rclone.org/b2/) -.IP \[bu] 2 -Hubic (https://rclone.org/hubic/) -.IP \[bu] 2 -Microsoft OneDrive (https://rclone.org/onedrive/) -.IP \[bu] 2 -Yandex Disk (https://rclone.org/yandex/) -.IP \[bu] 2 -SFTP (https://rclone.org/sftp/) -.IP \[bu] 2 -FTP (https://rclone.org/ftp/) +Google Drive (https://rclone.org/drive/) .IP \[bu] 2 HTTP (https://rclone.org/http/) .IP \[bu] 2 -Crypt (https://rclone.org/crypt/) \- to encrypt other remotes +Hubic (https://rclone.org/hubic/) +.IP \[bu] 2 +Microsoft Azure Blob Storage (https://rclone.org/azureblob/) +.IP \[bu] 2 +Microsoft OneDrive (https://rclone.org/onedrive/) +.IP \[bu] 2 +Openstack Swift / Rackspace Cloudfiles / Memset +Memstore (https://rclone.org/swift/) +.IP \[bu] 2 +QingStor (https://rclone.org/qingstor/) +.IP \[bu] 2 +SFTP (https://rclone.org/sftp/) +.IP \[bu] 2 +Yandex Disk (https://rclone.org/yandex/) +.IP \[bu] 2 +The local filesystem (https://rclone.org/local/) .SS Usage .PP Rclone syncs a directory tree from one storage system to another. @@ -374,11 +327,29 @@ rclone\ sync\ /local/path\ remote:path\ #\ syncs\ /local/path\ to\ the\ remote Enter an interactive configuration session. .SS Synopsis .PP -Enter an interactive configuration session. +\f[C]rclone\ config\f[] enters an interactive configuration sessions +where you can setup new remotes and manage existing ones. +You may also set or remove a password to protect your configuration. 
+.PP +Additional functions: +.IP \[bu] 2 +\f[C]rclone\ config\ edit\f[] \[en] same as above +.IP \[bu] 2 +\f[C]rclone\ config\ file\f[] \[en] show path of configuration file in +use +.IP \[bu] 2 +\f[C]rclone\ config\ show\f[] \[en] print (decrypted) config file .IP .nf \f[C] -rclone\ config +rclone\ config\ [function]\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ config \f[] .fi .SS rclone copy @@ -444,7 +415,14 @@ lists the destination directory or not. .IP .nf \f[C] -rclone\ copy\ source:path\ dest:path +rclone\ copy\ source:path\ dest:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ copy \f[] .fi .SS rclone sync @@ -475,7 +453,14 @@ contents go there. .IP .nf \f[C] -rclone\ sync\ source:path\ dest:path +rclone\ sync\ source:path\ dest:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ sync \f[] .fi .SS rclone move @@ -502,7 +487,14 @@ original (if no errors on copy) in \f[C]source:path\f[]. .IP .nf \f[C] -rclone\ move\ source:path\ dest:path +rclone\ move\ source:path\ dest:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ move \f[] .fi .SS rclone delete @@ -538,7 +530,14 @@ delete all files bigger than 100MBytes. .IP .nf \f[C] -rclone\ delete\ remote:path +rclone\ delete\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ delete \f[] .fi .SS rclone purge @@ -553,7 +552,14 @@ Use \f[C]delete\f[] if you want to selectively delete files. .IP .nf \f[C] -rclone\ purge\ remote:path +rclone\ purge\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ purge \f[] .fi .SS rclone mkdir @@ -565,7 +571,14 @@ Make the path if it doesn\[aq]t already exist. .IP .nf \f[C] -rclone\ mkdir\ remote:path +rclone\ mkdir\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ mkdir \f[] .fi .SS rclone rmdir @@ -579,7 +592,14 @@ that. .IP .nf \f[C] -rclone\ rmdir\ remote:path +rclone\ rmdir\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ rmdir \f[] .fi .SS rclone check @@ -611,6 +631,7 @@ rclone\ check\ source:path\ dest:path\ [flags] .nf \f[C] \ \ \ \ \ \ \-\-download\ \ \ Check\ by\ downloading\ rather\ than\ with\ hash. +\ \ \-h,\ \-\-help\ \ \ \ \ \ \ help\ for\ check \f[] .fi .SS rclone ls @@ -622,7 +643,14 @@ List all the objects in the path with size and path. .IP .nf \f[C] -rclone\ ls\ remote:path +rclone\ ls\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ ls \f[] .fi .SS rclone lsd @@ -634,7 +662,14 @@ List all directories/containers/buckets in the path. .IP .nf \f[C] -rclone\ lsd\ remote:path +rclone\ lsd\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ lsd \f[] .fi .SS rclone lsl @@ -646,7 +681,14 @@ List all the objects path with modification time, size and path. .IP .nf \f[C] -rclone\ lsl\ remote:path +rclone\ lsl\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ lsl \f[] .fi .SS rclone md5sum @@ -659,7 +701,14 @@ This is in the same format as the standard md5sum tool produces. 
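+.PP
+For example, to save the sums and then verify a local copy of the
+same files with the standard tool (an illustrative sketch):
+.IP
+.nf
+\f[C]
+rclone\ md5sum\ remote:path\ >\ SUMS
+cd\ /path/to/local/copy\ &&\ md5sum\ \-c\ SUMS
+\f[]
+.fi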
.IP .nf \f[C] -rclone\ md5sum\ remote:path +rclone\ md5sum\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ md5sum \f[] .fi .SS rclone sha1sum @@ -672,7 +721,14 @@ This is in the same format as the standard sha1sum tool produces. .IP .nf \f[C] -rclone\ sha1sum\ remote:path +rclone\ sha1sum\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ sha1sum \f[] .fi .SS rclone size @@ -684,7 +740,14 @@ Prints the total size and number of objects in remote:path. .IP .nf \f[C] -rclone\ size\ remote:path +rclone\ size\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ size \f[] .fi .SS rclone version @@ -696,7 +759,14 @@ Show the version number. .IP .nf \f[C] -rclone\ version +rclone\ version\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ version \f[] .fi .SS rclone cleanup @@ -710,7 +780,14 @@ Not supported by all remotes. .IP .nf \f[C] -rclone\ cleanup\ remote:path +rclone\ cleanup\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ cleanup \f[] .fi .SS rclone dedupe @@ -718,10 +795,14 @@ rclone\ cleanup\ remote:path Interactively find duplicate files delete/rename them. .SS Synopsis .PP -By default \f[C]dedup\f[] interactively finds duplicate files and offers -to delete all but one or rename them to be different. +By default \f[C]dedupe\f[] interactively finds duplicate files and +offers to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names. .PP +In the first pass it will merge directories with the same name. +It will do this iteratively until all the identical directories have +been merged. +.PP The \f[C]dedupe\f[] command will delete all but one of any identical (same md5sum) files it finds without confirmation. This means that for most duplicated files the \f[C]dedupe\f[] command @@ -837,6 +918,7 @@ rclone\ dedupe\ [mode]\ remote:path\ [flags] .nf \f[C] \ \ \ \ \ \ \-\-dedupe\-mode\ string\ \ \ Dedupe\ mode\ interactive|skip|first|newest|oldest|rename.\ (default\ "interactive") +\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ dedupe \f[] .fi .SS rclone authorize @@ -850,7 +932,14 @@ browser \- use as instructed by rclone config. .IP .nf \f[C] -rclone\ authorize +rclone\ authorize\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ authorize \f[] .fi .SS rclone cat @@ -902,6 +991,7 @@ rclone\ cat\ remote:path\ [flags] \ \ \ \ \ \ \-\-count\ int\ \ \ \ Only\ print\ N\ characters.\ (default\ \-1) \ \ \ \ \ \ \-\-discard\ \ \ \ \ \ Discard\ the\ output\ instead\ of\ printing. \ \ \ \ \ \ \-\-head\ int\ \ \ \ \ Only\ print\ the\ first\ N\ characters. +\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ help\ for\ cat \ \ \ \ \ \ \-\-offset\ int\ \ \ Start\ printing\ at\ offset\ N\ (or\ from\ end\ if\ \-ve). \ \ \ \ \ \ \-\-tail\ int\ \ \ \ \ Only\ print\ the\ last\ N\ characters. \f[] @@ -947,7 +1037,14 @@ It doesn\[aq]t delete files from the destination. .IP .nf \f[C] -rclone\ copyto\ source:path\ dest:path +rclone\ copyto\ source:path\ dest:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ copyto \f[] .fi .SS rclone cryptcheck @@ -988,7 +1085,43 @@ After it has run it will log the status of the encryptedremote:. 
.IP .nf \f[C] -rclone\ cryptcheck\ remote:path\ cryptedremote:path +rclone\ cryptcheck\ remote:path\ cryptedremote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ cryptcheck +\f[] +.fi +.SS rclone cryptdecode +.PP +Cryptdecode returns unencrypted file names. +.SS Synopsis +.PP +rclone cryptdecode returns unencrypted file names when provided with a +list of encrypted file names. +List limit is 10 items. +.PP +use it like this +.IP +.nf +\f[C] +rclone\ cryptdecode\ encryptedremote:\ encryptedfilename1\ encryptedfilename2 +\f[] +.fi +.IP +.nf +\f[C] +rclone\ cryptdecode\ encryptedremote:\ encryptedfilename\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ cryptdecode \f[] .fi .SS rclone dbhashsum @@ -1003,11 +1136,32 @@ The output is in the same format as md5sum and sha1sum. .IP .nf \f[C] -rclone\ dbhashsum\ remote:path +rclone\ dbhashsum\ remote:path\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ dbhashsum \f[] .fi .SS rclone genautocomplete .PP +Output completion script for a given shell. +.SS Synopsis +.PP +Generates a shell completion script for rclone. +Run with \-\-help to list the supported shells. +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ genautocomplete +\f[] +.fi +.SS rclone genautocomplete bash +.PP Output bash completion script for rclone. .SS Synopsis .PP @@ -1018,7 +1172,7 @@ need to be run with sudo or as root, eg .IP .nf \f[C] -sudo\ rclone\ genautocomplete +sudo\ rclone\ genautocomplete\ bash \f[] .fi .PP @@ -1035,7 +1189,53 @@ If you supply a command line argument the script will be written there. .IP .nf \f[C] -rclone\ genautocomplete\ [output_file] +rclone\ genautocomplete\ bash\ [output_file]\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ bash +\f[] +.fi +.SS rclone genautocomplete zsh +.PP +Output zsh completion script for rclone. +.SS Synopsis +.PP +Generates a zsh autocompletion script for rclone. +.PP +This writes to /usr/share/zsh/vendor\-completions/_rclone by default so +will probably need to be run with sudo or as root, eg +.IP +.nf +\f[C] +sudo\ rclone\ genautocomplete\ zsh +\f[] +.fi +.PP +Logout and login again to use the autocompletion scripts, or source them +directly +.IP +.nf +\f[C] +autoload\ \-U\ compinit\ &&\ compinit +\f[] +.fi +.PP +If you supply a command line argument the script will be written there. +.IP +.nf +\f[C] +rclone\ genautocomplete\ zsh\ [output_file]\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ zsh \f[] .fi .SS rclone gendocs @@ -1078,6 +1278,7 @@ rclone\ listremotes\ [flags] .IP .nf \f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ listremotes \ \ \-l,\ \-\-long\ \ \ Show\ the\ type\ as\ well\ as\ names. \f[] .fi @@ -1117,6 +1318,7 @@ rclone\ lsjson\ remote:path\ [flags] .nf \f[C] \ \ \ \ \ \ \-\-hash\ \ \ \ \ \ \ \ \ Include\ hashes\ in\ the\ output\ (may\ take\ longer). +\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ help\ for\ lsjson \ \ \ \ \ \ \-\-no\-modtime\ \ \ Don\[aq]t\ read\ the\ modification\ time\ (can\ speed\ things\ up). \ \ \-R,\ \-\-recursive\ \ \ \ Recurse\ into\ the\ listing. \f[] @@ -1166,6 +1368,32 @@ fusermount\ \-u\ /path/to/local/mount umount\ /path/to/local/mount \f[] .fi +.SS Installing on Windows +.PP +To run rclone mount on Windows, you will need to download and install +WinFsp (http://www.secfs.net/winfsp/). 
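+.PP
+Once WinFsp is installed, mounting works much as on Linux \- a minimal
+sketch, assuming a remote named \f[C]remote:\f[] and a free drive
+letter \f[C]X:\f[] (both illustrative, and see the Windows caveats
+below):
+.IP
+.nf
+\f[C]
+rclone\ mount\ remote:path/to/files\ X:
+\f[]
+.fi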
+.PP
+WinFsp is an open source (https://github.com/billziss-gh/winfsp) Windows
+File System Proxy which makes it easy to write user space file systems
+for Windows.
+It provides a FUSE emulation layer which rclone uses in combination with
+cgofuse (https://github.com/billziss-gh/cgofuse).
+Both of these packages are by Bill Zissimopoulos who was very helpful
+during the implementation of rclone mount for Windows.
+.SS Windows caveats
+.PP
+Note that drives created as Administrator are not visible to other
+accounts (including the account that was elevated as Administrator).
+So if you start a Windows drive from an Administrative Command Prompt
+and then try to access the same drive from Explorer (which does not run
+as Administrator), you will not be able to see the new drive.
+.PP
+The easiest way around this is to start the drive from a normal command
+prompt.
+It is also possible to start a drive from the SYSTEM account (using the
+WinFsp.Launcher
+infrastructure (https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture))
+which creates drives accessible for everyone on the system.
.SS Limitations
.PP
This can only write files sequentially; it can only seek when reading.
@@ -1215,17 +1443,6 @@
like this:
kill\ \-SIGHUP\ $(pidof\ rclone)
\f[]
.fi
-.SS Bugs
-.IP \[bu] 2
-All the remotes should work for read, but some may not for write
-.RS 2
-.IP \[bu] 2
-those which need to know the size in advance won\[aq]t \- eg B2
-.IP \[bu] 2
-maybe should pass in size as \-1 to mean work it out
-.IP \[bu] 2
-Or put in an an upload cache to cache the files on disk first
-.RE
.IP
.nf
\f[C]
@@ -1244,6 +1461,7 @@
rclone\ mount\ remote:path\ /path/to/mountpoint\ [flags]
\f[]
.fi
\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s)
\ \ \ \ \ \ \-\-fuse\-flag\ stringArray\ \ \ \ \ Flags\ or\ arguments\ to\ be\ passed\ direct\ to\ libfuse/WinFsp.\ Repeat\ if\ required.
\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
+\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ mount
\ \ \ \ \ \ \-\-max\-read\-ahead\ int\ \ \ \ \ \ \ \ The\ number\ of\ bytes\ that\ can\ be\ prefetched\ for\ sequential\ reads.\ (default\ 128k)
\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download.
\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up).
@@ -1300,7 +1518,14 @@
src will be deleted on successful transfer.
.IP
.nf
\f[C]
-rclone\ moveto\ source:path\ dest:path
+rclone\ moveto\ source:path\ dest:path\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ moveto
\f[]
.fi
.SS rclone ncdu
@@ -1340,7 +1565,14 @@
files, but is useful as it stands.
.IP
.nf
\f[C]
-rclone\ ncdu\ remote:path
+rclone\ ncdu\ remote:path\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ ncdu
\f[]
.fi
.SS rclone obscure
@@ -1352,7 +1584,62 @@
Obscure password for use in the rclone.conf
.IP
.nf
\f[C]
-rclone\ obscure\ password
+rclone\ obscure\ password\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ obscure
+\f[]
+.fi
+.SS rclone rcat
+.PP
+Copies standard input to file on remote.
+.SS Synopsis
+.PP
+rclone rcat reads from standard input (stdin) and copies it to a single
+remote file.
+.IP
+.nf
+\f[C]
+echo\ "hello\ world"\ |\ rclone\ rcat\ remote:path/to/file
+ffmpeg\ \-\ |\ rclone\ rcat\ \-\-checksum\ remote:path/to/file
+\f[]
+.fi
+.PP
+If the remote file already exists, it will be overwritten.
+.PP
+rcat will try to upload small files in a single request, which is
+usually more efficient than the streaming/chunked upload endpoints,
+which use multiple requests.
+Exact behaviour depends on the remote.
+What is considered a small file may be set through
+\f[C]\-\-streaming\-upload\-cutoff\f[].
+Uploading only starts after the cutoff is reached or if the file ends
+before that.
+The data must fit into RAM.
+The cutoff needs to be small enough to adhere to the limits of your
+remote \- see your remote\[aq]s documentation for details.
+Generally speaking, setting this cutoff too high will decrease your
+performance.
+.PP
+Note also that the upload cannot be retried, because the data is not
+kept around until the upload succeeds.
+If you need to transfer a lot of data, you\[aq]re better off caching
+locally and then \f[C]rclone\ move\f[] it to the destination.
+.IP
+.nf
+\f[C]
+rclone\ rcat\ remote:path\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ rcat
\f[]
.fi
.SS rclone rmdirs
@@ -1369,7 +1656,80 @@
empty directories in.
.IP
.nf
\f[C]
-rclone\ rmdirs\ remote:path
+rclone\ rmdirs\ remote:path\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ rmdirs
+\f[]
+.fi
+.SS rclone tree
+.PP
+List the contents of the remote in a tree\-like fashion.
+.SS Synopsis
+.PP
+rclone tree lists the contents of a remote in a similar way to the unix
+tree command.
+.PP
+For example
+.IP
+.nf
+\f[C]
+$\ rclone\ tree\ remote:path
+/
+├──\ file1
+├──\ file2
+├──\ file3
+└──\ subdir
+\ \ \ \ ├──\ file4
+\ \ \ \ └──\ file5
+
+1\ directories,\ 5\ files
+\f[]
+.fi
+.PP
+You can use any of the filtering options with the tree command (eg
+\-\-include and \-\-exclude).
+You can also use \-\-fast\-list.
+.PP
+The rclone tree command has many options for controlling the listing,
+which are compatible with those of the unix tree command.
+Note that not all of them have short options as they conflict with
+rclone\[aq]s short options.
+.IP
+.nf
+\f[C]
+rclone\ tree\ remote:path\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-a,\ \-\-all\ \ \ \ \ \ \ \ \ \ \ \ \ All\ files\ are\ listed\ (list\ .\ files\ too).
+\ \ \-C,\ \-\-color\ \ \ \ \ \ \ \ \ \ \ Turn\ colorization\ on\ always.
+\ \ \-d,\ \-\-dirs\-only\ \ \ \ \ \ \ List\ directories\ only.
+\ \ \ \ \ \ \-\-dirsfirst\ \ \ \ \ \ \ List\ directories\ before\ files\ (\-U\ disables).
+\ \ \ \ \ \ \-\-full\-path\ \ \ \ \ \ \ Print\ the\ full\ path\ prefix\ for\ each\ file.
+\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ help\ for\ tree
+\ \ \ \ \ \ \-\-human\ \ \ \ \ \ \ \ \ \ \ Print\ the\ size\ in\ a\ more\ human\ readable\ way.
+\ \ \ \ \ \ \-\-level\ int\ \ \ \ \ \ \ Descend\ only\ level\ directories\ deep.
+\ \ \-D,\ \-\-modtime\ \ \ \ \ \ \ \ \ Print\ the\ date\ of\ last\ modification.
+\ \ \-i,\ \-\-noindent\ \ \ \ \ \ \ \ Don\[aq]t\ print\ indentation\ lines.
+\ \ \ \ \ \ \-\-noreport\ \ \ \ \ \ \ \ Turn\ off\ file/directory\ count\ at\ end\ of\ tree\ listing.
+\ \ \-o,\ \-\-output\ string\ \ \ Output\ to\ file\ instead\ of\ stdout.
+\ \ \-p,\ \-\-protections\ \ \ \ \ Print\ the\ protections\ for\ each\ file.
+\ \ \-Q,\ \-\-quote\ \ \ \ \ \ \ \ \ \ \ Quote\ filenames\ with\ double\ quotes.
+\ \ \-s,\ \-\-size\ \ \ \ \ \ \ \ \ \ \ \ Print\ the\ size\ in\ bytes\ of\ each\ file.
+\ \ \ \ \ \ \-\-sort\ string\ \ \ \ \ Select\ sort:\ name,version,size,mtime,ctime.
+\ \ \ \ \ \ \-\-sort\-ctime\ \ \ \ \ \ Sort\ files\ by\ last\ status\ change\ time.
+\ \ \-t,\ \-\-sort\-modtime\ \ \ \ Sort\ files\ by\ last\ modification\ time.
+\ \ \-r,\ \-\-sort\-reverse\ \ \ \ Reverse\ the\ order\ of\ the\ sort.
+\ \ \-U,\ \-\-unsorted\ \ \ \ \ \ \ \ Leave\ files\ unsorted.
+\ \ \ \ \ \ \-\-version\ \ \ \ \ \ \ \ \ Sort\ files\ alphanumerically\ by\ version.
\f[]
.fi
.SS Copying single files
@@ -1573,6 +1933,13 @@
If running rclone from a script you might want to use today\[aq]s date
as the directory name passed to \f[C]\-\-backup\-dir\f[] to store the
old files, or you might want to pass \f[C]\-\-suffix\f[] with today\[aq]s
date.
+.SS \-\-bind string
+.PP
+Local address to bind to for outgoing connections.
+This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or
+host name.
+If the host name doesn\[aq]t resolve, or resolves to more than one IP
+address, it will give an error.
.SS \-\-bwlimit=BANDWIDTH_SPEC
.PP
This option controls the bandwidth limit.
@@ -1695,6 +2062,34 @@
One of \f[C]interactive\f[], \f[C]skip\f[], \f[C]first\f[],
\f[C]newest\f[], \f[C]oldest\f[], \f[C]rename\f[].
The default is \f[C]interactive\f[].
See the dedupe command for more information as to what these options
mean.
+.SS \-\-disable FEATURE,FEATURE,...
+.PP
+This disables a comma separated list of optional features.
+For example to disable server side move and server side copy use:
+.IP
+.nf
+\f[C]
+\-\-disable\ move,copy
+\f[]
+.fi
+.PP
+The features can be put in any case.
+.PP
+To see a list of which features can be disabled use:
+.IP
+.nf
+\f[C]
+\-\-disable\ help
+\f[]
+.fi
+.PP
+See the overview features (/overview/#features) and optional
+features (/overview/#optional-features) to get an idea of which feature
+does what.
+.PP
+This flag can be useful for debugging and in exceptional circumstances
+(eg Google Drive limiting the total volume of Server Side Copies to
+100GB/day).
.SS \-n, \-\-dry\-run
.PP
Do a trial run with no permanent changes.
@@ -1740,6 +2135,29 @@
regardless of the state of files on the destination.
Normally rclone would skip any files that have the same modification
time and are the same size (or have the same checksum if using
\f[C]\-\-checksum\f[]).
+.SS \-\-immutable
+.PP
+Treat source and destination files as immutable and disallow
+modification.
+.PP
+With this option set, files will be created and deleted as requested,
+but existing files will never be updated.
+If an existing file does not match between the source and destination,
+rclone will give the error
+\f[C]Source\ and\ destination\ exist\ but\ do\ not\ match:\ immutable\ file\ modified\f[].
+.PP
+Note that only commands which transfer files (e.g.
+\f[C]sync\f[], \f[C]copy\f[], \f[C]move\f[]) are affected by this
+behavior, and only modification is disallowed.
+Files may still be deleted explicitly (e.g.
+\f[C]delete\f[], \f[C]purge\f[]) or implicitly (e.g.
+\f[C]sync\f[], \f[C]move\f[]).
+Use \f[C]copy\ \-\-immutable\f[] if it is desired to avoid deletion as
+well as modification.
+.PP
+This can be useful as an additional layer of protection for immutable or
+append\-only data sets (notably backup archives), where modification
+implies corruption and should not be propagated.
.SS \-\-log\-file=FILE
.PP
Log all of rclone\[aq]s output to FILE.
@@ -2207,6 +2625,9 @@
Useful for debugging only.
Dump HTTP headers and bodies \- may contain sensitive info.
Can be very verbose.
Useful for debugging only.
+.PP +Note that the bodies are buffered in memory so don\[aq]t use this for +enormous files. .SS \-\-dump\-filters .PP Dump the filters to the output. @@ -2834,11 +3255,13 @@ Prepare a file like this \f[C]filter\-file.txt\f[] .IP .nf \f[C] -#\ a\ sample\ exclude\ rule\ file +#\ a\ sample\ filter\ rule\ file \-\ secret*.jpg +\ *.jpg +\ *.png +\ file2.avi +\-\ /dir/Trash/** ++\ /dir/** #\ exclude\ everything\ else \-\ * \f[] @@ -2850,6 +3273,8 @@ The rules are processed in the order that they are defined. This example will include all \f[C]jpg\f[] and \f[C]png\f[] files, exclude any files matching \f[C]secret*.jpg\f[] and include \f[C]file2.avi\f[]. +It will also include everything in the directory \f[C]dir\f[] at the +root of the sync, except \f[C]dir/Trash\f[] which it will exclude. Everything else will be excluded from the sync. .SS \f[C]\-\-files\-from\f[] \- Read list of source\-file names .PP @@ -3059,71 +3484,6 @@ MIME Type T} _ T{ -Google Drive -T}@T{ -MD5 -T}@T{ -Yes -T}@T{ -No -T}@T{ -Yes -T}@T{ -R/W -T} -T{ -Amazon S3 -T}@T{ -MD5 -T}@T{ -Yes -T}@T{ -No -T}@T{ -No -T}@T{ -R/W -T} -T{ -Openstack Swift -T}@T{ -MD5 -T}@T{ -Yes -T}@T{ -No -T}@T{ -No -T}@T{ -R/W -T} -T{ -Dropbox -T}@T{ -DBHASH † -T}@T{ -Yes -T}@T{ -Yes -T}@T{ -No -T}@T{ -\- -T} -T{ -Google Cloud Storage -T}@T{ -MD5 -T}@T{ -Yes -T}@T{ -No -T}@T{ -No -T}@T{ -R/W -T} -T{ Amazon Drive T}@T{ MD5 @@ -3137,20 +3497,7 @@ T}@T{ R T} T{ -Microsoft OneDrive -T}@T{ -SHA1 -T}@T{ -Yes -T}@T{ -Yes -T}@T{ -No -T}@T{ -R -T} -T{ -Hubic +Amazon S3 T}@T{ MD5 T}@T{ @@ -3176,26 +3523,26 @@ T}@T{ R/W T} T{ -Yandex Disk +Box T}@T{ -MD5 +SHA1 +T}@T{ +Yes T}@T{ Yes T}@T{ No T}@T{ -No -T}@T{ -R/W -T} -T{ -SFTP -T}@T{ \- +T} +T{ +Dropbox +T}@T{ +DBHASH † T}@T{ Yes T}@T{ -Depends +Yes T}@T{ No T}@T{ @@ -3208,19 +3555,84 @@ T}@T{ T}@T{ No T}@T{ -Yes +No T}@T{ No T}@T{ \- T} T{ +Google Cloud Storage +T}@T{ +MD5 +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +R/W +T} +T{ +Google Drive +T}@T{ +MD5 +T}@T{ +Yes +T}@T{ +No +T}@T{ +Yes +T}@T{ +R/W +T} +T{ HTTP T}@T{ \- T}@T{ No T}@T{ +No +T}@T{ +No +T}@T{ +R +T} +T{ +Hubic +T}@T{ +MD5 +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +R/W +T} +T{ +Microsoft Azure Blob Storage +T}@T{ +MD5 +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +R/W +T} +T{ +Microsoft OneDrive +T}@T{ +SHA1 +T}@T{ +Yes +T}@T{ Yes T}@T{ No @@ -3228,6 +3640,58 @@ T}@T{ R T} T{ +Openstack Swift +T}@T{ +MD5 +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +R/W +T} +T{ +QingStor +T}@T{ +MD5 +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +R/W +T} +T{ +SFTP +T}@T{ +MD5, SHA1 ‡ +T}@T{ +Yes +T}@T{ +Depends +T}@T{ +No +T}@T{ +\- +T} +T{ +Yandex Disk +T}@T{ +MD5 +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +R/W +T} +T{ The local filesystem T}@T{ All @@ -3244,19 +3708,20 @@ T} .SS Hash .PP The cloud storage system supports various hash types of the objects. -.PD 0 -.P -.PD The hashes are used when transferring data as an integrity check and can be specifically used with the \f[C]\-\-checksum\f[] flag in syncs and in the \f[C]check\f[] command. .PP -To use the checksum checks between filesystems they must support a -common hash type. +To use the verify checksums when transferring between cloud storage +systems they must support a common hash type. .PP † Note that Dropbox supports its own custom hash (https://www.dropbox.com/developers/reference/content-hash). This is an SHA256 sum of all the 4MB block SHA256s. +.PP +‡ SFTP supports checksums if the same login has shell access and +\f[C]md5sum\f[] or \f[C]sha1sum\f[] as well as \f[C]echo\f[] are in the +remote\[aq]s PATH. 
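+.PP
+For example, Amazon S3 and Google Cloud Storage both support MD5 (see
+the table above), so a sync between them can verify files by checksum
+rather than by modification time and size \- a minimal sketch, where
+the remote names \f[C]s3:\f[] and \f[C]gcs:\f[] are assumptions for
+illustration:
+.IP
+.nf
+\f[C]
+rclone\ sync\ \-\-checksum\ s3:bucket\ gcs:bucket
+\f[]
+.fi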
.SS ModTime .PP The cloud storage system supports setting modification times on objects. @@ -3326,7 +3791,7 @@ more efficient. .PP .TS tab(@); -l c c c c c c. +l c c c c c c c. T{ Name T}@T{ @@ -3341,14 +3806,16 @@ T}@T{ CleanUp T}@T{ ListR +T}@T{ +StreamUpload T} _ T{ -Google Drive +Amazon Drive T}@T{ Yes T}@T{ -Yes +No T}@T{ Yes T}@T{ @@ -3357,6 +3824,8 @@ T}@T{ No #575 (https://github.com/ncw/rclone/issues/575) T}@T{ No +T}@T{ +No T} T{ Amazon S3 @@ -3372,17 +3841,38 @@ T}@T{ No T}@T{ Yes +T}@T{ +Yes T} T{ -Openstack Swift +Backblaze B2 T}@T{ -Yes † +No +T}@T{ +No +T}@T{ +No +T}@T{ +No T}@T{ Yes T}@T{ -No +Yes T}@T{ -No +Yes +T} +T{ +Box +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No #575 (https://github.com/ncw/rclone/issues/575) T}@T{ No T}@T{ @@ -3402,6 +3892,25 @@ T}@T{ No #575 (https://github.com/ncw/rclone/issues/575) T}@T{ No +T}@T{ +Yes +T} +T{ +FTP +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +Yes T} T{ Google Cloud Storage @@ -3417,34 +3926,40 @@ T}@T{ No T}@T{ Yes +T}@T{ +Yes T} T{ -Amazon Drive +Google Drive +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes T}@T{ Yes T}@T{ No T}@T{ Yes -T}@T{ -Yes -T}@T{ -No #575 (https://github.com/ncw/rclone/issues/575) -T}@T{ -No T} T{ -Microsoft OneDrive +HTTP T}@T{ -Yes +No T}@T{ -Yes +No T}@T{ -Yes +No T}@T{ -No #197 (https://github.com/ncw/rclone/issues/197) +No T}@T{ -No #575 (https://github.com/ncw/rclone/issues/575) +No +T}@T{ +No T}@T{ No T} @@ -3462,24 +3977,13 @@ T}@T{ No T}@T{ Yes -T} -T{ -Backblaze B2 -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -Yes T}@T{ Yes T} T{ -Yandex Disk +Microsoft Azure Blob Storage +T}@T{ +Yes T}@T{ Yes T}@T{ @@ -3489,9 +3993,60 @@ No T}@T{ No T}@T{ +Yes +T}@T{ +No +T} +T{ +Microsoft OneDrive +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No #197 (https://github.com/ncw/rclone/issues/197) +T}@T{ No #575 (https://github.com/ncw/rclone/issues/575) T}@T{ +No +T}@T{ +No +T} +T{ +Openstack Swift +T}@T{ +Yes † +T}@T{ Yes +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T} +T{ +QingStor +T}@T{ +No +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +No T} T{ SFTP @@ -3507,9 +4062,15 @@ T}@T{ No T}@T{ No +T}@T{ +Yes T} T{ -FTP +Yandex Disk +T}@T{ +Yes +T}@T{ +No T}@T{ No T}@T{ @@ -3519,24 +4080,7 @@ Yes T}@T{ Yes T}@T{ -No -T}@T{ -No -T} -T{ -HTTP -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -No +Yes T} T{ The local filesystem @@ -3552,6 +4096,8 @@ T}@T{ No T}@T{ No +T}@T{ +Yes T} .TE .SS Purge @@ -3603,17 +4149,45 @@ The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the \f[C]\-\-fast\-list\f[] flag to work. See the rclone docs (/docs/#fast-list) for more details. -.SS Google Drive +.SS StreamUpload .PP -Paths are specified as \f[C]drive:path\f[] +Some remotes allow files to be uploaded without knowing the file size in +advance. +This allows certain operations to work without spooling the file to +local disk first, e.g. +\f[C]rclone\ rcat\f[]. +.SS Amazon Drive .PP -Drive paths may be as deep as required, eg -\f[C]drive:directory/subdirectory\f[]. +Paths are specified as \f[C]remote:path\f[] .PP -The initial setup for drive involves getting a token from Google drive +Paths may be as deep as required, eg +\f[C]remote:directory/subdirectory\f[]. +.PP +The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. \f[C]rclone\ config\f[] walks you through it. 
.PP
+The configuration process for Amazon Drive may involve using an oauth
+proxy (https://github.com/ncw/oauthproxy).
+This is used to keep the Amazon credentials out of the source code.
+The proxy runs in Google\[aq]s very secure App Engine environment and
+doesn\[aq]t store any credentials which pass through it.
+.PP
+\f[B]NB\f[] rclone doesn\[aq]t currently have its own Amazon Drive
+credentials (see the
+forum (https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/)
+for why) so you will either need to have your own \f[C]client_id\f[] and
+\f[C]client_secret\f[] with Amazon Drive, or use a third party oauth
+proxy in which case you will need to enter \f[C]client_id\f[],
+\f[C]client_secret\f[], \f[C]auth_url\f[] and \f[C]token_url\f[].
+.PP
+Note also that if you are not using Amazon\[aq]s \f[C]auth_url\f[] and
+\f[C]token_url\f[] (ie you filled in something for those), then when
+setting up on a remote machine you can only use the copy\-the\-config
+method of
+configuration (https://rclone.org/remote_setup/#configuring-by-copying-the-config-file)
+\- \f[C]rclone\ authorize\f[] will not work.
+.PP
Here is an example of how to make a remote called \f[C]remote\f[].
First run:
.IP
.nf
\f[C]
\ rclone\ config
\f[]
.fi
.PP
@@ -3665,15 +4239,20 @@
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "sftp"
14\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
-Storage>\ 8
-Google\ Application\ Client\ Id\ \-\ leave\ blank\ normally.
-client_id>\ 
-Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally.
-client_secret>\ 
+Storage>\ 1
+Amazon\ Application\ Client\ Id\ \-\ required.
+client_id>\ your\ client\ ID\ goes\ here
+Amazon\ Application\ Client\ Secret\ \-\ required.
+client_secret>\ your\ client\ secret\ goes\ here
+Auth\ server\ URL\ \-\ leave\ blank\ to\ use\ Amazon\[aq]s.
+auth_url>\ Optional\ auth\ URL
+Token\ server\ url\ \-\ leave\ blank\ to\ use\ Amazon\[aq]s.
+token_url>\ Optional\ token\ URL
Remote\ config
+Make\ sure\ your\ Redirect\ URL\ is\ set\ to\ "http://127.0.0.1:53682/"\ in\ your\ custom\ config.
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
-\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine\ or\ Y\ didn\[aq]t\ work
+\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
y)\ Yes
n)\ No
y/n>\ y
If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
Log\ in\ and\ authorize\ rclone\ for\ access
Waiting\ for\ code...
Got\ code
-Configure\ this\ as\ a\ team\ drive?
-y)\ Yes
-n)\ No
-y/n>\ n
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
-client_id\ =\ 
-client_secret\ =\ 
-token\ =\ {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014\-03\-16T13:57:58.955387075Z","Extra":null}
+client_id\ =\ your\ client\ ID\ goes\ here
+client_secret\ =\ your\ client\ secret\ goes\ here
+auth_url\ =\ Optional\ auth\ URL
+token_url\ =\ Optional\ token\ URL
+token\ =\ {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015\-09\-06T16:07:39.658438471+01:00"}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
\f[]
.fi
.PP
+See the remote setup docs (https://rclone.org/remote_setup/) for how to
+set it up on a machine with no Internet browser available.
+.PP Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if you use auto config mode. +token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on \f[C]http://127.0.0.1:53682/\f[] and this it may require you -to unblock it temporarily if you are running a host firewall, or use -manual mode. +to unblock it temporarily if you are running a host firewall. .PP -You can then use it like this, +Once configured you can then use \f[C]rclone\f[] like this, .PP -List directories in top level of your drive +List directories in top level of your Amazon Drive .IP .nf \f[C] @@ -3716,7 +4295,7 @@ rclone\ lsd\ remote: \f[] .fi .PP -List all the files in your drive +List all the files in your Amazon Drive .IP .nf \f[C] @@ -3724,375 +4303,94 @@ rclone\ ls\ remote: \f[] .fi .PP -To copy a local directory to a drive directory called backup +To copy a local directory to an Amazon Drive directory called backup .IP .nf \f[C] rclone\ copy\ /home/source\ remote:backup \f[] .fi -.SS Team drives +.SS Modified time and MD5SUMs .PP -If you want to configure the remote to point to a Google Team Drive then -answer \f[C]y\f[] to the question -\f[C]Configure\ this\ as\ a\ team\ drive?\f[]. +Amazon Drive doesn\[aq]t allow modification times to be changed via the +API so these won\[aq]t be accurate or used for syncing. .PP -This will fetch the list of Team Drives from google and allow you to -configure which one you want to use. -You can also type in a team drive ID if you prefer. -.PP -For example: -.IP -.nf -\f[C] -Configure\ this\ as\ a\ team\ drive? -y)\ Yes -n)\ No -y/n>\ y -Fetching\ team\ drive\ list... -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Rclone\ Test -\ \ \ \\\ "xxxxxxxxxxxxxxxxxxxx" -\ 2\ /\ Rclone\ Test\ 2 -\ \ \ \\\ "yyyyyyyyyyyyyyyyyyyy" -\ 3\ /\ Rclone\ Test\ 3 -\ \ \ \\\ "zzzzzzzzzzzzzzzzzzzz" -Enter\ a\ Team\ Drive\ ID>\ 1 -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -[remote] -client_id\ =\ -client_secret\ =\ -token\ =\ {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014\-03\-16T13:57:58.955387075Z","Extra":null} -team_drive\ =\ xxxxxxxxxxxxxxxxxxxx -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -y)\ Yes\ this\ is\ OK -e)\ Edit\ this\ remote -d)\ Delete\ this\ remote -y/e/d>\ y -\f[] -.fi -.SS Modified time -.PP -Google drive stores modification times accurate to 1 ms. -.SS Revisions -.PP -Google drive stores revisions of files. -When you upload a change to an existing file to google drive using -rclone it will create a new revision of that file. -.PP -Revisions follow the standard google policy which at time of writing was -.IP \[bu] 2 -They are deleted after 30 days or 100 revisions (whatever comes first). -.IP \[bu] 2 -They do not count towards a user storage quota. +It does store MD5SUMs so for a more accurate sync, you can use the +\f[C]\-\-checksum\f[] flag. .SS Deleting files .PP -By default rclone will delete files permanently when requested. -If sending them to the trash is required instead then use the -\f[C]\-\-drive\-use\-trash\f[] flag. +Any files you delete with rclone will end up in the trash. +Amazon don\[aq]t provide an API to permanently delete files, nor to +empty the trash, so you will have to do that with one of Amazon\[aq]s +apps or via the Amazon Drive website. 
+As of November 17, 2016, files are automatically deleted by Amazon from +the trash after 30 days. +.SS Using with non \f[C]\&.com\f[] Amazon accounts +.PP +Let\[aq]s say you usually use \f[C]amazon.co.uk\f[]. +When you authenticate with rclone it will take you to an +\f[C]amazon.com\f[] page to log in. +Your \f[C]amazon.co.uk\f[] email and password should work here just +fine. .SS Specific options .PP Here are the command line options specific to this cloud storage system. -.SS \-\-drive\-auth\-owner\-only +.SS \-\-acd\-templink\-threshold=SIZE .PP -Only consider files owned by the authenticated user. -.SS \-\-drive\-chunk\-size=SIZE +Files this size or more will be downloaded via their \f[C]tempLink\f[]. +This is to work around a problem with Amazon Drive which blocks +downloads of files bigger than about 10GB. +The default for this is 9GB which shouldn\[aq]t need to be changed. .PP -Upload chunk size. -Must a power of 2 >= 256k. -Default value is 8 MB. +To download files above this threshold, rclone requests a +\f[C]tempLink\f[] which downloads the file through a temporary URL +directly from the underlying S3 storage. +.SS \-\-acd\-upload\-wait\-per\-gb=TIME .PP -Making this larger will improve performance, but note that each chunk is -buffered in memory one per transfer. +Sometimes Amazon Drive gives an error when a file has been fully +uploaded but the file appears anyway after a little while. +This happens sometimes for files over 1GB in size and nearly every time +for files bigger than 10GB. +This parameter controls the time rclone waits for the file to appear. .PP -Reducing this will reduce memory usage but decrease performance. -.SS \-\-drive\-auth\-owner\-only +The default value for this parameter is 3 minutes per GB, so by default +it will wait 3 minutes for every GB uploaded to see if the file appears. .PP -Only consider files owned by the authenticated user. -.SS \-\-drive\-formats +You can disable this feature by setting it to 0. +This may cause conflict errors as rclone retries the failed upload but +the file will most likely appear correctly eventually. .PP -Google documents can only be exported from Google drive. -When rclone downloads a Google doc it chooses a format to download -depending upon this setting. +These values were determined empirically by observing lots of uploads of +big files for a range of file sizes. .PP -By default the formats are \f[C]docx,xlsx,pptx,svg\f[] which are a -sensible default for an editable document. -.PP -When choosing a format, rclone runs down the list provided in order and -chooses the first file format the doc can be exported as from the list. -If the file can\[aq]t be exported to a format on the formats list, then -rclone will choose a format from the default list. -.PP -If you prefer an archive copy then you might use -\f[C]\-\-drive\-formats\ pdf\f[], or if you prefer -openoffice/libreoffice formats you might use -\f[C]\-\-drive\-formats\ ods,odt,odp\f[]. -.PP -Note that rclone adds the extension to the google doc, so if it is -calles \f[C]My\ Spreadsheet\f[] on google docs, it will be exported as -\f[C]My\ Spreadsheet.xlsx\f[] or \f[C]My\ Spreadsheet.pdf\f[] etc. -.PP -Here are the possible extensions with their corresponding mime types. -.PP -.TS -tab(@); -lw(9.7n) lw(11.7n) lw(12.6n). 
-T{ -Extension -T}@T{ -Mime Type -T}@T{ -Description -T} -_ -T{ -csv -T}@T{ -text/csv -T}@T{ -Standard CSV format for Spreadsheets -T} -T{ -doc -T}@T{ -application/msword -T}@T{ -Micosoft Office Document -T} -T{ -docx -T}@T{ -application/vnd.openxmlformats\-officedocument.wordprocessingml.document -T}@T{ -Microsoft Office Document -T} -T{ -epub -T}@T{ -application/epub+zip -T}@T{ -E\-book format -T} -T{ -html -T}@T{ -text/html -T}@T{ -An HTML Document -T} -T{ -jpg -T}@T{ -image/jpeg -T}@T{ -A JPEG Image File -T} -T{ -odp -T}@T{ -application/vnd.oasis.opendocument.presentation -T}@T{ -Openoffice Presentation -T} -T{ -ods -T}@T{ -application/vnd.oasis.opendocument.spreadsheet -T}@T{ -Openoffice Spreadsheet -T} -T{ -ods -T}@T{ -application/x\-vnd.oasis.opendocument.spreadsheet -T}@T{ -Openoffice Spreadsheet -T} -T{ -odt -T}@T{ -application/vnd.oasis.opendocument.text -T}@T{ -Openoffice Document -T} -T{ -pdf -T}@T{ -application/pdf -T}@T{ -Adobe PDF Format -T} -T{ -png -T}@T{ -image/png -T}@T{ -PNG Image Format -T} -T{ -pptx -T}@T{ -application/vnd.openxmlformats\-officedocument.presentationml.presentation -T}@T{ -Microsoft Office Powerpoint -T} -T{ -rtf -T}@T{ -application/rtf -T}@T{ -Rich Text Format -T} -T{ -svg -T}@T{ -image/svg+xml -T}@T{ -Scalable Vector Graphics Format -T} -T{ -tsv -T}@T{ -text/tab\-separated\-values -T}@T{ -Standard TSV format for spreadsheets -T} -T{ -txt -T}@T{ -text/plain -T}@T{ -Plain Text -T} -T{ -xls -T}@T{ -application/vnd.ms\-excel -T}@T{ -Microsoft Office Spreadsheet -T} -T{ -xlsx -T}@T{ -application/vnd.openxmlformats\-officedocument.spreadsheetml.sheet -T}@T{ -Microsoft Office Spreadsheet -T} -T{ -zip -T}@T{ -application/zip -T}@T{ -A ZIP file of HTML, Images CSS -T} -.TE -.SS \-\-drive\-list\-chunk int -.PP -Size of listing chunk 100\-1000. -0 to disable. -(default 1000) -.SS \-\-drive\-shared\-with\-me -.PP -Only show files that are shared with me -.SS \-\-drive\-skip\-gdocs -.PP -Skip google documents in all listings. -If given, gdocs practically become invisible to rclone. -.SS \-\-drive\-trashed\-only -.PP -Only show files that are in the trash. -This will show trashed files in their original directory structure. -.SS \-\-drive\-upload\-cutoff=SIZE -.PP -File size cutoff for switching to chunked upload. -Default is 8 MB. -.SS \-\-drive\-use\-trash -.PP -Send files to the trash instead of deleting permanently. -Defaults to off, namely deleting files permanently. +Upload with the \f[C]\-v\f[] flag to see more info about what rclone is +doing in this situation. .SS Limitations .PP -Drive has quite a lot of rate limiting. -This causes rclone to be limited to transferring about 2 files per -second only. -Individual files may be transferred much faster at 100s of MBytes/s but -lots of small files can take a long time. -.SS Duplicated files +Note that Amazon Drive is case insensitive so you can\[aq]t have a file +called "Hello.doc" and one called "hello.doc". .PP -Sometimes, for no reason I\[aq]ve been able to track down, drive will -duplicate a file that rclone uploads. -Drive unlike all the other remotes can have duplicated files. +Amazon Drive has rate limiting so you may notice errors in the sync (429 +errors). +rclone will automatically retry the sync up to 3 times by default (see +\f[C]\-\-retries\f[] flag) which should hopefully work around this +problem. .PP -Duplicated files cause problems with the syncing and you will see -messages in the log about duplicates. +Amazon Drive has an internal limit of file sizes that can be uploaded to +the service. 
+This limit is not officially published, but all files larger than this
+will fail.
.PP
At the time of writing (Jan 2016) it is in the area of 50GB per file.
This means that larger files are likely to fail.
.PP
Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as it would any
other failure.
To avoid this problem, use the \f[C]\-\-max\-size\ 50000M\f[] option to
limit the maximum size of uploaded files.
Note that \f[C]\-\-max\-size\f[] does not split files into segments, it
only ignores files over this size.
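+.PP
+For example, to sync to an Amazon Drive remote while skipping any files
+too big to upload \- a minimal sketch, the paths being illustrative:
+.IP
+.nf
+\f[C]
+rclone\ sync\ \-\-max\-size\ 50000M\ /home/source\ remote:backup
+\f[]
+.fi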
.SS Amazon S3 .PP Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for @@ -4353,7 +4651,8 @@ In order of precedence: Directly in the rclone configuration file (as configured by \f[C]rclone\ config\f[]) .IP \[bu] 2 -set \f[C]access_key_id\f[] and \f[C]secret_access_key\f[] +set \f[C]access_key_id\f[] and \f[C]secret_access_key\f[]. +\f[C]session_token\f[] can be optionally set when using AWS STS. .IP \[bu] 2 Runtime configuration: .IP \[bu] 2 @@ -4367,6 +4666,8 @@ Access Key ID: \f[C]AWS_ACCESS_KEY_ID\f[] or \f[C]AWS_ACCESS_KEY\f[] .IP \[bu] 2 Secret Access Key: \f[C]AWS_SECRET_ACCESS_KEY\f[] or \f[C]AWS_SECRET_KEY\f[] +.IP \[bu] 2 +Session Token: \f[C]AWS_SESSION_TOKEN\f[] .RE .IP \[bu] 2 Running \f[C]rclone\f[] on an EC2 instance with an IAM role @@ -4431,6 +4732,22 @@ For reference, here\[aq]s an Ansible script (https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b) that will generate one or more buckets that will work with \f[C]rclone\ sync\f[]. +.SS Glacier +.PP +You can transition objects to glacier storage using a lifecycle +policy (http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html). +The bucket can still be synced or copied into normally, but if rclone +tries to access the data you will see an error like below. +.IP +.nf +\f[C] +2017/09/11\ 19:07:43\ Failed\ to\ sync:\ failed\ to\ open\ source\ object:\ Object\ in\ GLACIER,\ restore\ first:\ path/to/file +\f[] +.fi +.PP +In this case you need to +restore (http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html) +the object(s) in question before using rclone. .SS Specific options .PP Here are the command line options specific to this cloud storage system. @@ -4622,30 +4939,16 @@ So once set up, for example to copy files into a bucket rclone\ copy\ /path/to/files\ minio:bucket \f[] .fi -.SS Swift +.SS Wasabi .PP -Swift refers to Openstack Object -Storage (https://www.openstack.org/software/openstack-storage/). -Commercial implementations of that being: -.IP \[bu] 2 -Rackspace Cloud Files (https://www.rackspace.com/cloud/files/) -.IP \[bu] 2 -Memset Memstore (https://www.memset.com/cloud/storage/) +Wasabi (https://wasabi.com) is a cloud\-based object storage service for +a broad range of applications and use cases. +Wasabi is designed for individuals and organizations that require a +high\-performance, reliable, and secure data storage infrastructure at +minimal cost. .PP -Paths are specified as \f[C]remote:container\f[] (or \f[C]remote:\f[] -for the \f[C]lsd\f[] command.) You may put subdirectories in too, eg -\f[C]remote:container/path/to/dir\f[]. -.PP -Here is an example of making a swift configuration. -First run -.IP -.nf -\f[C] -rclone\ config -\f[] -.fi -.PP -This will guide you through an interactive setup process. +Wasabi provides an S3 interface which can be configured for use with +rclone like this. .IP .nf \f[C] @@ -4653,503 +4956,81 @@ No\ remotes\ found\ \-\ make\ a\ new\ one n)\ New\ remote s)\ Set\ configuration\ password n/s>\ n -name>\ remote +name>\ wasabi Type\ of\ storage\ to\ configure. 
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ 1\ /\ Amazon\ Drive \ \ \ \\\ "amazon\ cloud\ drive" \ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) \ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic -\ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 11 -User\ name\ to\ log\ in. -user>\ user_name -API\ key\ or\ password. -key>\ password_or_api_key -Authentication\ URL\ for\ server. +[snip] +Storage>\ s3 +Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Rackspace\ US -\ \ \ \\\ "https://auth.api.rackspacecloud.com/v1.0" -\ 2\ /\ Rackspace\ UK -\ \ \ \\\ "https://lon.auth.api.rackspacecloud.com/v1.0" -\ 3\ /\ Rackspace\ v2 -\ \ \ \\\ "https://identity.api.rackspacecloud.com/v2.0" -\ 4\ /\ Memset\ Memstore\ UK -\ \ \ \\\ "https://auth.storage.memset.com/v1.0" -\ 5\ /\ Memset\ Memstore\ UK\ v2 -\ \ \ \\\ "https://auth.storage.memset.com/v2.0" -\ 6\ /\ OVH -\ \ \ \\\ "https://auth.cloud.ovh.net/v2.0" -auth>\ 1 -User\ domain\ \-\ optional\ (v3\ auth) -domain>\ Default -Tenant\ name\ \-\ optional\ for\ v1\ auth,\ required\ otherwise -tenant>\ tenant_name -Tenant\ domain\ \-\ optional\ (v3\ auth) -tenant_domain> -Region\ name\ \-\ optional -region> -Storage\ URL\ \-\ optional -storage_url> -AuthVersion\ \-\ optional\ \-\ set\ to\ (1,2,3)\ if\ your\ auth\ URL\ has\ no\ version -auth_version> -Remote\ config -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -[remote] -user\ =\ user_name -key\ =\ password_or_api_key -auth\ =\ https://auth.api.rackspacecloud.com/v1.0 -domain\ =\ Default -tenant\ = -tenant_domain\ = -region\ = -storage_url\ = -auth_version\ = -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -y)\ Yes\ this\ is\ OK -e)\ Edit\ this\ remote -d)\ Delete\ this\ remote -y/e/d>\ y -\f[] -.fi -.PP -This remote is called \f[C]remote\f[] and can now be used like this -.PP -See all containers -.IP -.nf -\f[C] -rclone\ lsd\ remote: -\f[] -.fi -.PP -Make a new container -.IP -.nf -\f[C] -rclone\ mkdir\ remote:container -\f[] -.fi -.PP -List the contents of a container -.IP -.nf -\f[C] -rclone\ ls\ remote:container -\f[] -.fi -.PP -Sync \f[C]/home/local/directory\f[] to the remote container, deleting -any excess files in the container. 
-.IP -.nf -\f[C] -rclone\ sync\ /home/local/directory\ remote:container -\f[] -.fi -.SS Configuration from an Openstack credentials file -.PP -An Opentstack credentials file typically looks something something like -this (without the comments) -.IP -.nf -\f[C] -export\ OS_AUTH_URL=https://a.provider.net/v2.0 -export\ OS_TENANT_ID=ffffffffffffffffffffffffffffffff -export\ OS_TENANT_NAME="1234567890123456" -export\ OS_USERNAME="123abc567xy" -echo\ "Please\ enter\ your\ OpenStack\ Password:\ " -read\ \-sr\ OS_PASSWORD_INPUT -export\ OS_PASSWORD=$OS_PASSWORD_INPUT -export\ OS_REGION_NAME="SBG1" -if\ [\ \-z\ "$OS_REGION_NAME"\ ];\ then\ unset\ OS_REGION_NAME;\ fi -\f[] -.fi -.PP -The config file needs to look something like this where -\f[C]$OS_USERNAME\f[] represents the value of the \f[C]OS_USERNAME\f[] -variable \- \f[C]123abc567xy\f[] in the example above. -.IP -.nf -\f[C] -[remote] -type\ =\ swift -user\ =\ $OS_USERNAME -key\ =\ $OS_PASSWORD -auth\ =\ $OS_AUTH_URL -tenant\ =\ $OS_TENANT_NAME -\f[] -.fi -.PP -Note that you may (or may not) need to set \f[C]region\f[] too \- try -without first. -.SS \-\-fast\-list -.PP -This remote supports \f[C]\-\-fast\-list\f[] which allows you to use -fewer transactions in exchange for more memory. -See the rclone docs (/docs/#fast-list) for more details. -.SS Specific options -.PP -Here are the command line options specific to this cloud storage system. -.SS \-\-swift\-chunk\-size=SIZE -.PP -Above this size files will be chunked into a _segments container. -The default for this is 5GB which is its maximum value. -.SS Modified time -.PP -The modified time is stored as metadata on the object as -\f[C]X\-Object\-Meta\-Mtime\f[] as floating point since the epoch -accurate to 1 ns. -.PP -This is a defacto standard (used in the official python\-swiftclient -amongst others) for storing the modification time for an object. -.SS Limitations -.PP -The Swift API doesn\[aq]t return a correct MD5SUM for segmented files -(Dynamic or Static Large Objects) so rclone won\[aq]t check or use the -MD5SUM for these. -.SS Troubleshooting -.SS Rclone gives Failed to create file system for "remote:": Bad Request -.PP -Due to an oddity of the underlying swift library, it gives a "Bad -Request" error rather than a more sensible error when the authentication -fails for Swift. -.PP -So this most likely means your username / password is wrong. -You can investigate further with the \f[C]\-\-dump\-bodies\f[] flag. -.PP -This may also be caused by specifying the region when you shouldn\[aq]t -have (eg OVH). -.SS Rclone gives Failed to create file system: Response didn\[aq]t have -storage storage url and auth token -.PP -This is most likely caused by forgetting to specify your tenant when -setting up a swift remote. -.SS Dropbox -.PP -Paths are specified as \f[C]remote:path\f[] -.PP -Dropbox paths may be as deep as required, eg -\f[C]remote:directory/subdirectory\f[]. -.PP -The initial setup for dropbox involves getting a token from Dropbox -which you need to do in your browser. -\f[C]rclone\ config\f[] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[]. -First run: -.IP -.nf -\f[C] -\ rclone\ config -\f[] -.fi -.PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -n)\ New\ remote -d)\ Delete\ remote -q)\ Quit\ config -e/n/d/q>\ n -name>\ remote -Type\ of\ storage\ to\ configure. 
+\ 1\ /\ Enter\ AWS\ credentials\ in\ the\ next\ step +\ \ \ \\\ "false" +\ 2\ /\ Get\ AWS\ credentials\ from\ the\ environment\ (env\ vars\ or\ IAM) +\ \ \ \\\ "true" +env_auth>\ 1 +AWS\ Access\ Key\ ID\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials. +access_key_id>\ YOURACCESSKEY +AWS\ Secret\ Access\ Key\ (password)\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials. +secret_access_key>\ YOURSECRETACCESSKEY +Region\ to\ connect\ to. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic -\ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 4 -Dropbox\ App\ Key\ \-\ leave\ blank\ normally. -app_key> -Dropbox\ App\ Secret\ \-\ leave\ blank\ normally. -app_secret> -Remote\ config -Please\ visit: -https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code -Enter\ the\ code:\ XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -[remote] -app_key\ = -app_secret\ = -token\ =\ XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -y)\ Yes\ this\ is\ OK -e)\ Edit\ this\ remote -d)\ Delete\ this\ remote -y/e/d>\ y -\f[] -.fi -.PP -You can then use it like this, -.PP -List directories in top level of your dropbox -.IP -.nf -\f[C] -rclone\ lsd\ remote: -\f[] -.fi -.PP -List all the files in your dropbox -.IP -.nf -\f[C] -rclone\ ls\ remote: -\f[] -.fi -.PP -To copy a local directory to a dropbox directory called backup -.IP -.nf -\f[C] -rclone\ copy\ /home/source\ remote:backup -\f[] -.fi -.SS Modified time and Hashes -.PP -Dropbox supports modified times, but the only way to set a modification -time is to re\-upload the file. -.PP -This means that if you uploaded your data with an older version of -rclone which didn\[aq]t support the v2 API and modified times, rclone -will decide to upload all your old data to fix the modification times. -If you don\[aq]t want this to happen use \f[C]\-\-size\-only\f[] or -\f[C]\-\-checksum\f[] flag to stop it. -.PP -Dropbox supports its own hash -type (https://www.dropbox.com/developers/reference/content-hash) which -is checked for all transfers. -.SS Specific options -.PP -Here are the command line options specific to this cloud storage system. -.SS \-\-dropbox\-chunk\-size=SIZE -.PP -Upload chunk size. -Max 150M. -The default is 128MB. -Note that this isn\[aq]t buffered into memory. -.SS Limitations -.PP -Note that Dropbox is case insensitive so you can\[aq]t have a file -called "Hello.doc" and one called "hello.doc". -.PP -There are some file names such as \f[C]thumbs.db\f[] which Dropbox -can\[aq]t store. -There is a full list of them in the "Ignored Files" section of this -document (https://www.dropbox.com/en/help/145). 
-Rclone will issue an error message -\f[C]File\ name\ disallowed\ \-\ not\ uploading\f[] if it attempt to -upload one of those file names, but the sync won\[aq]t fail. -.PP -If you have more than 10,000 files in a directory then -\f[C]rclone\ purge\ dropbox:dir\f[] will return the error -\f[C]Failed\ to\ purge:\ There\ are\ too\ many\ files\ involved\ in\ this\ operation\f[]. -As a work\-around do an \f[C]rclone\ delete\ dropbox:dir\f[] followed by -an \f[C]rclone\ rmdir\ dropbox:dir\f[]. -.SS Google Cloud Storage -.PP -Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for -the \f[C]lsd\f[] command.) You may put subdirectories in too, eg -\f[C]remote:bucket/path/to/dir\f[]. -.PP -The initial setup for google cloud storage involves getting a token from -Google Cloud Storage which you need to do in your browser. -\f[C]rclone\ config\f[] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[]. -First run: -.IP -.nf -\f[C] -\ rclone\ config -\f[] -.fi -.PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -n)\ New\ remote -d)\ Delete\ remote -q)\ Quit\ config -e/n/d/q>\ n -name>\ remote -Type\ of\ storage\ to\ configure. +\ \ \ /\ The\ default\ endpoint\ \-\ a\ good\ choice\ if\ you\ are\ unsure. +\ 1\ |\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest. +\ \ \ |\ Leave\ location\ constraint\ empty. +\ \ \ \\\ "us\-east\-1" +[snip] +region>\ us\-east\-1 +Endpoint\ for\ S3\ API. +Leave\ blank\ if\ using\ AWS\ to\ use\ the\ default\ endpoint\ for\ the\ region. +Specify\ if\ using\ an\ S3\ clone\ such\ as\ Ceph. +endpoint>\ s3.wasabisys.com +Location\ constraint\ \-\ must\ be\ set\ to\ match\ the\ Region.\ Used\ when\ creating\ buckets\ only. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic -\ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 6 -Google\ Application\ Client\ Id\ \-\ leave\ blank\ normally. -client_id> -Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally. -client_secret> -Project\ number\ optional\ \-\ needed\ only\ for\ list/create/delete\ buckets\ \-\ see\ your\ developer\ console. -project_number>\ 12345678 -Service\ Account\ Credentials\ JSON\ file\ path\ \-\ needed\ only\ if\ you\ want\ use\ SA\ instead\ of\ interactive\ login. -service_account_file> -Access\ Control\ List\ for\ new\ objects. -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access. -\ \ \ \\\ "authenticatedRead" -\ 2\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ owners\ get\ OWNER\ access. -\ \ \ \\\ "bucketOwnerFullControl" -\ 3\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ owners\ get\ READER\ access. -\ \ \ \\\ "bucketOwnerRead" -\ 4\ /\ Object\ owner\ gets\ OWNER\ access\ [default\ if\ left\ blank]. 
-\ \ \ \\\ "private" -\ 5\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ members\ get\ access\ according\ to\ their\ roles. -\ \ \ \\\ "projectPrivate" -\ 6\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Users\ get\ READER\ access. -\ \ \ \\\ "publicRead" -object_acl>\ 4 -Access\ Control\ List\ for\ new\ buckets. -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access. -\ \ \ \\\ "authenticatedRead" -\ 2\ /\ Project\ team\ owners\ get\ OWNER\ access\ [default\ if\ left\ blank]. -\ \ \ \\\ "private" -\ 3\ /\ Project\ team\ members\ get\ access\ according\ to\ their\ roles. -\ \ \ \\\ "projectPrivate" -\ 4\ /\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ READER\ access. -\ \ \ \\\ "publicRead" -\ 5\ /\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ WRITER\ access. -\ \ \ \\\ "publicReadWrite" -bucket_acl>\ 2 -Location\ for\ the\ newly\ created\ buckets. -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Empty\ for\ default\ location\ (US). +\ 1\ /\ Empty\ for\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest. \ \ \ \\\ "" -\ 2\ /\ Multi\-regional\ location\ for\ Asia. -\ \ \ \\\ "asia" -\ 3\ /\ Multi\-regional\ location\ for\ Europe. -\ \ \ \\\ "eu" -\ 4\ /\ Multi\-regional\ location\ for\ United\ States. -\ \ \ \\\ "us" -\ 5\ /\ Taiwan. -\ \ \ \\\ "asia\-east1" -\ 6\ /\ Tokyo. -\ \ \ \\\ "asia\-northeast1" -\ 7\ /\ Singapore. -\ \ \ \\\ "asia\-southeast1" -\ 8\ /\ Sydney. -\ \ \ \\\ "australia\-southeast1" -\ 9\ /\ Belgium. -\ \ \ \\\ "europe\-west1" -10\ /\ London. -\ \ \ \\\ "europe\-west2" -11\ /\ Iowa. -\ \ \ \\\ "us\-central1" -12\ /\ South\ Carolina. -\ \ \ \\\ "us\-east1" -13\ /\ Northern\ Virginia. -\ \ \ \\\ "us\-east4" -14\ /\ Oregon. -\ \ \ \\\ "us\-west1" -location>\ 12 -The\ storage\ class\ to\ use\ when\ storing\ objects\ in\ Google\ Cloud\ Storage. +[snip] +location_constraint>\ +Canned\ ACL\ used\ when\ creating\ buckets\ and/or\ storing\ objects\ in\ S3. +For\ more\ info\ visit\ https://docs.aws.amazon.com/AmazonS3/latest/dev/acl\-overview.html#canned\-acl +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Owner\ gets\ FULL_CONTROL.\ No\ one\ else\ has\ access\ rights\ (default). +\ \ \ \\\ "private" +[snip] +acl>\ +The\ server\-side\ encryption\ algorithm\ used\ when\ storing\ this\ object\ in\ S3. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ None +\ \ \ \\\ "" +\ 2\ /\ AES256 +\ \ \ \\\ "AES256" +server_side_encryption>\ +The\ storage\ class\ to\ use\ when\ storing\ objects\ in\ S3. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ 1\ /\ Default \ \ \ \\\ "" -\ 2\ /\ Multi\-regional\ storage\ class -\ \ \ \\\ "MULTI_REGIONAL" -\ 3\ /\ Regional\ storage\ class -\ \ \ \\\ "REGIONAL" -\ 4\ /\ Nearline\ storage\ class -\ \ \ \\\ "NEARLINE" -\ 5\ /\ Coldline\ storage\ class -\ \ \ \\\ "COLDLINE" -\ 6\ /\ Durable\ reduced\ availability\ storage\ class -\ \ \ \\\ "DURABLE_REDUCED_AVAILABILITY" -storage_class>\ 5 +\ 2\ /\ Standard\ storage\ class +\ \ \ \\\ "STANDARD" +\ 3\ /\ Reduced\ redundancy\ storage\ class +\ \ \ \\\ "REDUCED_REDUNDANCY" +\ 4\ /\ Standard\ Infrequent\ Access\ storage\ class +\ \ \ \\\ "STANDARD_IA" +storage_class>\ Remote\ config -Use\ auto\ config? 
-\ *\ Say\ Y\ if\ not\ sure -\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine\ or\ Y\ didn\[aq]t\ work -y)\ Yes -n)\ No -y/n>\ y -If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth -Log\ in\ and\ authorize\ rclone\ for\ access -Waiting\ for\ code... -Got\ code \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -[remote] -type\ =\ google\ cloud\ storage -client_id\ = -client_secret\ = -token\ =\ {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014\-07\-17T20:49:14.929208288+01:00","Extra":null} -project_number\ =\ 12345678 -object_acl\ =\ private -bucket_acl\ =\ private +[wasabi] +env_auth\ =\ false +access_key_id\ =\ YOURACCESSKEY +secret_access_key\ =\ YOURSECRETACCESSKEY +region\ =\ us\-east\-1 +endpoint\ =\ s3.wasabisys.com +location_constraint\ =\ +acl\ =\ +server_side_encryption\ =\ +storage_class\ =\ \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- y)\ Yes\ this\ is\ OK e)\ Edit\ this\ remote @@ -5158,639 +5039,22 @@ y/e/d>\ y \f[] .fi .PP -Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if you use auto config mode. -This only runs from the moment it opens your browser to the moment you -get back the verification code. -This is on \f[C]http://127.0.0.1:53682/\f[] and this it may require you -to unblock it temporarily if you are running a host firewall, or use -manual mode. -.PP -This remote is called \f[C]remote\f[] and can now be used like this -.PP -See all the buckets in your project +This will leave the config file looking like this. .IP .nf \f[C] -rclone\ lsd\ remote: +[wasabi] +env_auth\ =\ false +access_key_id\ =\ YOURACCESSKEY +secret_access_key\ =\ YOURSECRETACCESSKEY +region\ =\ us\-east\-1 +endpoint\ =\ s3.wasabisys.com +location_constraint\ =\ +acl\ =\ +server_side_encryption\ =\ +storage_class\ =\ \f[] .fi -.PP -Make a new bucket -.IP -.nf -\f[C] -rclone\ mkdir\ remote:bucket -\f[] -.fi -.PP -List the contents of a bucket -.IP -.nf -\f[C] -rclone\ ls\ remote:bucket -\f[] -.fi -.PP -Sync \f[C]/home/local/directory\f[] to the remote bucket, deleting any -excess files in the bucket. -.IP -.nf -\f[C] -rclone\ sync\ /home/local/directory\ remote:bucket -\f[] -.fi -.SS Service Account support -.PP -You can set up rclone with Google Cloud Storage in an unattended mode, -i.e. -not tied to a specific end\-user Google account. -This is useful when you want to synchronise files onto machines that -don\[aq]t have actively logged\-in users, for example build machines. -.PP -To get credentials for Google Cloud Platform IAM Service -Accounts (https://cloud.google.com/iam/docs/service-accounts), please -head to the Service -Account (https://console.cloud.google.com/permissions/serviceaccounts) -section of the Google Developer Console. -Service Accounts behave just like normal \f[C]User\f[] permissions in -Google Cloud Storage -ACLs (https://cloud.google.com/storage/docs/access-control), so you can -limit their access (e.g. -make them read only). -After creating an account, a JSON file containing the Service -Account\[aq]s credentials will be downloaded onto your machines. -These credentials are what rclone will use for authentication. 
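+.PP
+Once configured, the remote (named \f[C]wasabi\f[] in the example
+above) can be used like any other S3 remote.
+For example (a brief sketch, using a hypothetical bucket called
+\f[C]my\-bucket\f[]):
+.IP
+.nf
+\f[C]
+rclone\ mkdir\ wasabi:my\-bucket
+rclone\ copy\ /home/source\ wasabi:my\-bucket
+rclone\ ls\ wasabi:my\-bucket
+\f[]
+.fi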
-.PP -To use a Service Account instead of OAuth2 token flow, enter the path to -your Service Account credentials at the \f[C]service_account_file\f[] -prompt and rclone won\[aq]t use the browser based authentication flow. -.SS \-\-fast\-list -.PP -This remote supports \f[C]\-\-fast\-list\f[] which allows you to use -fewer transactions in exchange for more memory. -See the rclone docs (/docs/#fast-list) for more details. -.SS Modified time -.PP -Google google cloud storage stores md5sums natively and rclone stores -modification times as metadata on the object, under the "mtime" key in -RFC3339 format accurate to 1ns. -.SS Amazon Drive -.PP -Paths are specified as \f[C]remote:path\f[] -.PP -Paths may be as deep as required, eg -\f[C]remote:directory/subdirectory\f[]. -.PP -The initial setup for Amazon Drive involves getting a token from Amazon -which you need to do in your browser. -\f[C]rclone\ config\f[] walks you through it. -.PP -The configuration process for Amazon Drive may involve using an oauth -proxy (https://github.com/ncw/oauthproxy). -This is used to keep the Amazon credentials out of the source code. -The proxy runs in Google\[aq]s very secure App Engine environment and -doesn\[aq]t store any credentials which pass through it. -.PP -\f[B]NB\f[] rclone doesn\[aq]t not currently have its own Amazon Drive -credentials (see the -forum (https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/) -for why) so you will either need to have your own \f[C]client_id\f[] and -\f[C]client_secret\f[] with Amazon Drive, or use a a third party ouath -proxy in which case you will need to enter \f[C]client_id\f[], -\f[C]client_secret\f[], \f[C]auth_url\f[] and \f[C]token_url\f[]. -.PP -Note also if you are not using Amazon\[aq]s \f[C]auth_url\f[] and -\f[C]token_url\f[], (ie you filled in something for those) then if -setting up on a remote machine you can only use the copying the config -method of -configuration (https://rclone.org/remote_setup/#configuring-by-copying-the-config-file) -\- \f[C]rclone\ authorize\f[] will not work. -.PP -Here is an example of how to make a remote called \f[C]remote\f[]. -First run: -.IP -.nf -\f[C] -\ rclone\ config -\f[] -.fi -.PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -No\ remotes\ found\ \-\ make\ a\ new\ one -n)\ New\ remote -r)\ Rename\ remote -c)\ Copy\ remote -s)\ Set\ configuration\ password -q)\ Quit\ config -n/r/c/s/q>\ n -name>\ remote -Type\ of\ storage\ to\ configure. -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ FTP\ Connection -\ \ \ \\\ "ftp" -\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 8\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 9\ /\ Hubic -\ \ \ \\\ "hubic" -10\ /\ Local\ Disk -\ \ \ \\\ "local" -11\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -13\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -14\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 1 -Amazon\ Application\ Client\ Id\ \-\ required. -client_id>\ your\ client\ ID\ goes\ here -Amazon\ Application\ Client\ Secret\ \-\ required. 
-client_secret>\ your\ client\ secret\ goes\ here -Auth\ server\ URL\ \-\ leave\ blank\ to\ use\ Amazon\[aq]s. -auth_url>\ Optional\ auth\ URL -Token\ server\ url\ \-\ leave\ blank\ to\ use\ Amazon\[aq]s. -token_url>\ Optional\ token\ URL -Remote\ config -Make\ sure\ your\ Redirect\ URL\ is\ set\ to\ "http://127.0.0.1:53682/"\ in\ your\ custom\ config. -Use\ auto\ config? -\ *\ Say\ Y\ if\ not\ sure -\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine -y)\ Yes -n)\ No -y/n>\ y -If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth -Log\ in\ and\ authorize\ rclone\ for\ access -Waiting\ for\ code... -Got\ code -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -[remote] -client_id\ =\ your\ client\ ID\ goes\ here -client_secret\ =\ your\ client\ secret\ goes\ here -auth_url\ =\ Optional\ auth\ URL -token_url\ =\ Optional\ token\ URL -token\ =\ {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015\-09\-06T16:07:39.658438471+01:00"} -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -y)\ Yes\ this\ is\ OK -e)\ Edit\ this\ remote -d)\ Delete\ this\ remote -y/e/d>\ y -\f[] -.fi -.PP -See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. -.PP -Note that rclone runs a webserver on your local machine to collect the -token as returned from Amazon. -This only runs from the moment it opens your browser to the moment you -get back the verification code. -This is on \f[C]http://127.0.0.1:53682/\f[] and this it may require you -to unblock it temporarily if you are running a host firewall. -.PP -Once configured you can then use \f[C]rclone\f[] like this, -.PP -List directories in top level of your Amazon Drive -.IP -.nf -\f[C] -rclone\ lsd\ remote: -\f[] -.fi -.PP -List all the files in your Amazon Drive -.IP -.nf -\f[C] -rclone\ ls\ remote: -\f[] -.fi -.PP -To copy a local directory to an Amazon Drive directory called backup -.IP -.nf -\f[C] -rclone\ copy\ /home/source\ remote:backup -\f[] -.fi -.SS Modified time and MD5SUMs -.PP -Amazon Drive doesn\[aq]t allow modification times to be changed via the -API so these won\[aq]t be accurate or used for syncing. -.PP -It does store MD5SUMs so for a more accurate sync, you can use the -\f[C]\-\-checksum\f[] flag. -.SS Deleting files -.PP -Any files you delete with rclone will end up in the trash. -Amazon don\[aq]t provide an API to permanently delete files, nor to -empty the trash, so you will have to do that with one of Amazon\[aq]s -apps or via the Amazon Drive website. -As of November 17, 2016, files are automatically deleted by Amazon from -the trash after 30 days. -.SS Using with non \f[C]\&.com\f[] Amazon accounts -.PP -Let\[aq]s say you usually use \f[C]amazon.co.uk\f[]. -When you authenticate with rclone it will take you to an -\f[C]amazon.com\f[] page to log in. -Your \f[C]amazon.co.uk\f[] email and password should work here just -fine. -.SS Specific options -.PP -Here are the command line options specific to this cloud storage system. -.SS \-\-acd\-templink\-threshold=SIZE -.PP -Files this size or more will be downloaded via their \f[C]tempLink\f[]. -This is to work around a problem with Amazon Drive which blocks -downloads of files bigger than about 10GB. -The default for this is 9GB which shouldn\[aq]t need to be changed. 
-.PP -To download files above this threshold, rclone requests a -\f[C]tempLink\f[] which downloads the file through a temporary URL -directly from the underlying S3 storage. -.SS \-\-acd\-upload\-wait\-per\-gb=TIME -.PP -Sometimes Amazon Drive gives an error when a file has been fully -uploaded but the file appears anyway after a little while. -This happens sometimes for files over 1GB in size and nearly every time -for files bigger than 10GB. -This parameter controls the time rclone waits for the file to appear. -.PP -The default value for this parameter is 3 minutes per GB, so by default -it will wait 3 minutes for every GB uploaded to see if the file appears. -.PP -You can disable this feature by setting it to 0. -This may cause conflict errors as rclone retries the failed upload but -the file will most likely appear correctly eventually. -.PP -These values were determined empirically by observing lots of uploads of -big files for a range of file sizes. -.PP -Upload with the \f[C]\-v\f[] flag to see more info about what rclone is -doing in this situation. -.SS Limitations -.PP -Note that Amazon Drive is case insensitive so you can\[aq]t have a file -called "Hello.doc" and one called "hello.doc". -.PP -Amazon Drive has rate limiting so you may notice errors in the sync (429 -errors). -rclone will automatically retry the sync up to 3 times by default (see -\f[C]\-\-retries\f[] flag) which should hopefully work around this -problem. -.PP -Amazon Drive has an internal limit of file sizes that can be uploaded to -the service. -This limit is not officially published, but all files larger than this -will fail. -.PP -At the time of writing (Jan 2016) is in the area of 50GB per file. -This means that larger files are likely to fail. -.PP -Unfortunately there is no way for rclone to see that this failure is -because of file size, so it will retry the operation, as any other -failure. -To avoid this problem, use \f[C]\-\-max\-size\ 50000M\f[] option to -limit the maximum size of uploaded files. -Note that \f[C]\-\-max\-size\f[] does not split files into segments, it -only ignores files over this size. -.SS Microsoft OneDrive -.PP -Paths are specified as \f[C]remote:path\f[] -.PP -Paths may be as deep as required, eg -\f[C]remote:directory/subdirectory\f[]. -.PP -The initial setup for OneDrive involves getting a token from Microsoft -which you need to do in your browser. -\f[C]rclone\ config\f[] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[]. -First run: -.IP -.nf -\f[C] -\ rclone\ config -\f[] -.fi -.PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -No\ remotes\ found\ \-\ make\ a\ new\ one -n)\ New\ remote -s)\ Set\ configuration\ password -n/s>\ n -name>\ remote -Type\ of\ storage\ to\ configure. 
-Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic -\ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 10 -Microsoft\ App\ Client\ Id\ \-\ leave\ blank\ normally. -client_id> -Microsoft\ App\ Client\ Secret\ \-\ leave\ blank\ normally. -client_secret> -Remote\ config -Use\ auto\ config? -\ *\ Say\ Y\ if\ not\ sure -\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine -y)\ Yes -n)\ No -y/n>\ y -If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth -Log\ in\ and\ authorize\ rclone\ for\ access -Waiting\ for\ code... -Got\ code -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -[remote] -client_id\ = -client_secret\ = -token\ =\ {"access_token":"XXXXXX"} -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -y)\ Yes\ this\ is\ OK -e)\ Edit\ this\ remote -d)\ Delete\ this\ remote -y/e/d>\ y -\f[] -.fi -.PP -See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. -.PP -Note that rclone runs a webserver on your local machine to collect the -token as returned from Microsoft. -This only runs from the moment it opens your browser to the moment you -get back the verification code. -This is on \f[C]http://127.0.0.1:53682/\f[] and this it may require you -to unblock it temporarily if you are running a host firewall. -.PP -Once configured you can then use \f[C]rclone\f[] like this, -.PP -List directories in top level of your OneDrive -.IP -.nf -\f[C] -rclone\ lsd\ remote: -\f[] -.fi -.PP -List all the files in your OneDrive -.IP -.nf -\f[C] -rclone\ ls\ remote: -\f[] -.fi -.PP -To copy a local directory to an OneDrive directory called backup -.IP -.nf -\f[C] -rclone\ copy\ /home/source\ remote:backup -\f[] -.fi -.SS Modified time and hashes -.PP -OneDrive allows modification times to be set on objects accurate to 1 -second. -These will be used to detect whether objects need syncing or not. -.PP -One drive supports SHA1 type hashes, so you can use -\f[C]\-\-checksum\f[] flag. -.SS Deleting files -.PP -Any files you delete with rclone will end up in the trash. -Microsoft doesn\[aq]t provide an API to permanently delete files, nor to -empty the trash, so you will have to do that with one of Microsoft\[aq]s -apps or via the OneDrive website. -.SS Specific options -.PP -Here are the command line options specific to this cloud storage system. -.SS \-\-onedrive\-chunk\-size=SIZE -.PP -Above this size files will be chunked \- must be multiple of 320k. -The default is 10MB. -Note that the chunks will be buffered into memory. -.SS \-\-onedrive\-upload\-cutoff=SIZE -.PP -Cutoff for switching to chunked upload \- must be <= 100MB. -The default is 10MB. -.SS Limitations -.PP -Note that OneDrive is case insensitive so you can\[aq]t have a file -called "Hello.doc" and one called "hello.doc". 
-.PP -Rclone only supports your default OneDrive, and doesn\[aq]t work with -One Drive for business. -Both these issues may be fixed at some point depending on user demand! -.PP -There are quite a few characters that can\[aq]t be in OneDrive file -names. -These can\[aq]t occur on Windows platforms, but on non\-Windows -platforms they are common. -Rclone will map these names to and from an identical looking unicode -equivalent. -For example if a file has a \f[C]?\f[] in it will be mapped to -\f[C]?\f[] instead. -.PP -The largest allowed file size is 10GiB (10,737,418,240 bytes). -.SS Hubic -.PP -Paths are specified as \f[C]remote:path\f[] -.PP -Paths are specified as \f[C]remote:container\f[] (or \f[C]remote:\f[] -for the \f[C]lsd\f[] command.) You may put subdirectories in too, eg -\f[C]remote:container/path/to/dir\f[]. -.PP -The initial setup for Hubic involves getting a token from Hubic which -you need to do in your browser. -\f[C]rclone\ config\f[] walks you through it. -.PP -Here is an example of how to make a remote called \f[C]remote\f[]. -First run: -.IP -.nf -\f[C] -\ rclone\ config -\f[] -.fi -.PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -n)\ New\ remote -s)\ Set\ configuration\ password -n/s>\ n -name>\ remote -Type\ of\ storage\ to\ configure. -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic -\ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 8 -Hubic\ Client\ Id\ \-\ leave\ blank\ normally. -client_id> -Hubic\ Client\ Secret\ \-\ leave\ blank\ normally. -client_secret> -Remote\ config -Use\ auto\ config? -\ *\ Say\ Y\ if\ not\ sure -\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine -y)\ Yes -n)\ No -y/n>\ y -If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth -Log\ in\ and\ authorize\ rclone\ for\ access -Waiting\ for\ code... -Got\ code -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -[remote] -client_id\ = -client_secret\ = -token\ =\ {"access_token":"XXXXXX"} -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -y)\ Yes\ this\ is\ OK -e)\ Edit\ this\ remote -d)\ Delete\ this\ remote -y/e/d>\ y -\f[] -.fi -.PP -See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. -.PP -Note that rclone runs a webserver on your local machine to collect the -token as returned from Hubic. -This only runs from the moment it opens your browser to the moment you -get back the verification code. -This is on \f[C]http://127.0.0.1:53682/\f[] and this it may require you -to unblock it temporarily if you are running a host firewall. 
-.PP -Once configured you can then use \f[C]rclone\f[] like this, -.PP -List containers in the top level of your Hubic -.IP -.nf -\f[C] -rclone\ lsd\ remote: -\f[] -.fi -.PP -List all the files in your Hubic -.IP -.nf -\f[C] -rclone\ ls\ remote: -\f[] -.fi -.PP -To copy a local directory to an Hubic directory called backup -.IP -.nf -\f[C] -rclone\ copy\ /home/source\ remote:backup -\f[] -.fi -.PP -If you want the directory to be visible in the official \f[I]Hubic -browser\f[], you need to copy your files to the \f[C]default\f[] -directory -.IP -.nf -\f[C] -rclone\ copy\ /home/source\ remote:default/backup -\f[] -.fi -.SS \-\-fast\-list -.PP -This remote supports \f[C]\-\-fast\-list\f[] which allows you to use -fewer transactions in exchange for more memory. -See the rclone docs (/docs/#fast-list) for more details. -.SS Modified time -.PP -The modified time is stored as metadata on the object as -\f[C]X\-Object\-Meta\-Mtime\f[] as floating point since the epoch -accurate to 1 ns. -.PP -This is a defacto standard (used in the official python\-swiftclient -amongst others) for storing the modification time for an object. -.PP -Note that Hubic wraps the Swift backend, so most of the properties of -are the same. -.SS Limitations -.PP -This uses the normal OpenStack Swift mechanism to refresh the Swift API -credentials and ignores the expires field returned by the Hubic API. -.PP -The Swift API doesn\[aq]t return a correct MD5SUM for segmented files -(Dynamic or Static Large Objects) so rclone won\[aq]t check or use the -MD5SUM for these. .SS Backblaze B2 .PP B2 is Backblaze\[aq]s cloud storage @@ -5947,11 +5211,14 @@ moment, so this sets the upper limit on the memory used. .PP When rclone uploads a new version of a file it creates a new version of it (https://www.backblaze.com/b2/docs/file_versions.html). -Likewise when you delete a file, the old version will still be -available. +Likewise when you delete a file, the old version will be marked hidden +and still be available. +Conversely, you may opt in to a "hard delete" of files with the +\f[C]\-\-b2\-hard\-delete\f[] flag which would permanently remove the +file instead of hiding it. .PP -Old versions of files are visible using the \f[C]\-\-b2\-versions\f[] -flag. +Old versions of files, where available, are visible using the +\f[C]\-\-b2\-versions\f[] flag. .PP If you wish to remove all the old versions then you can use the \f[C]rclone\ cleanup\ remote:bucket\f[] command which will delete all @@ -6129,489 +5396,16 @@ nearest millisecond appended to them. .PP Note that when using \f[C]\-\-b2\-versions\f[] no file write operations are permitted, so you can\[aq]t upload files or delete them. -.SS Yandex Disk +.SS Box .PP -Yandex Disk (https://disk.yandex.com) is a cloud storage solution -created by Yandex (https://yandex.com). +Paths are specified as \f[C]remote:path\f[] .PP -Yandex paths may be as deep as required, eg +Paths may be as deep as required, eg \f[C]remote:directory/subdirectory\f[]. .PP -Here is an example of making a yandex configuration. -First run -.IP -.nf -\f[C] -rclone\ config -\f[] -.fi -.PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -No\ remotes\ found\ \-\ make\ a\ new\ one -n)\ New\ remote -s)\ Set\ configuration\ password -n/s>\ n -name>\ remote -Type\ of\ storage\ to\ configure. 
-Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic -\ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 13 -Yandex\ Client\ Id\ \-\ leave\ blank\ normally. -client_id> -Yandex\ Client\ Secret\ \-\ leave\ blank\ normally. -client_secret> -Remote\ config -Use\ auto\ config? -\ *\ Say\ Y\ if\ not\ sure -\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine -y)\ Yes -n)\ No -y/n>\ y -If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth -Log\ in\ and\ authorize\ rclone\ for\ access -Waiting\ for\ code... -Got\ code -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -[remote] -client_id\ = -client_secret\ = -token\ =\ {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016\-12\-29T12:27:11.362788025Z"} -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -y)\ Yes\ this\ is\ OK -e)\ Edit\ this\ remote -d)\ Delete\ this\ remote -y/e/d>\ y -\f[] -.fi -.PP -See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. -.PP -Note that rclone runs a webserver on your local machine to collect the -token as returned from Yandex Disk. -This only runs from the moment it opens your browser to the moment you -get back the verification code. -This is on \f[C]http://127.0.0.1:53682/\f[] and this it may require you -to unblock it temporarily if you are running a host firewall. -.PP -Once configured you can then use \f[C]rclone\f[] like this, -.PP -See top level directories -.IP -.nf -\f[C] -rclone\ lsd\ remote: -\f[] -.fi -.PP -Make a new directory -.IP -.nf -\f[C] -rclone\ mkdir\ remote:directory -\f[] -.fi -.PP -List the contents of a directory -.IP -.nf -\f[C] -rclone\ ls\ remote:directory -\f[] -.fi -.PP -Sync \f[C]/home/local/directory\f[] to the remote path, deleting any -excess files in the path. -.IP -.nf -\f[C] -rclone\ sync\ /home/local/directory\ remote:directory -\f[] -.fi -.SS \-\-fast\-list -.PP -This remote supports \f[C]\-\-fast\-list\f[] which allows you to use -fewer transactions in exchange for more memory. -See the rclone docs (/docs/#fast-list) for more details. -.SS Modified time -.PP -Modified times are supported and are stored accurate to 1 ns in custom -metadata called \f[C]rclone_modified\f[] in RFC3339 with nanoseconds -format. -.SS MD5 checksums -.PP -MD5 checksums are natively supported by Yandex Disk. -.SS SFTP -.PP -SFTP is the Secure (or SSH) File Transfer -Protocol (https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol). -.PP -It runs over SSH v2 and is standard with most modern SSH installations. -.PP -Paths are specified as \f[C]remote:path\f[]. -If the path does not begin with a \f[C]/\f[] it is relative to the home -directory of the user. -An empty path \f[C]remote:\f[] refers to the users home directory. 
-.PP -Here is an example of making a SFTP configuration. -First run -.IP -.nf -\f[C] -rclone\ config -\f[] -.fi -.PP -This will guide you through an interactive setup process. -.IP -.nf -\f[C] -No\ remotes\ found\ \-\ make\ a\ new\ one -n)\ New\ remote -s)\ Set\ configuration\ password -q)\ Quit\ config -n/s/q>\ n -name>\ remote -Type\ of\ storage\ to\ configure. -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ FTP\ Connection -\ \ \ \\\ "ftp" -\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 8\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 9\ /\ Hubic -\ \ \ \\\ "hubic" -10\ /\ Local\ Disk -\ \ \ \\\ "local" -11\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -13\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -14\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -15\ /\ http\ Connection -\ \ \ \\\ "http" -Storage>\ sftp -SSH\ host\ to\ connect\ to -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Connect\ to\ example.com -\ \ \ \\\ "example.com" -host>\ example.com -SSH\ username,\ leave\ blank\ for\ current\ username,\ ncw -user>\ sftpuser -SSH\ port,\ leave\ blank\ to\ use\ default\ (22) -port>\ -SSH\ password,\ leave\ blank\ to\ use\ ssh\-agent. -y)\ Yes\ type\ in\ my\ own\ password -g)\ Generate\ random\ password -n)\ No\ leave\ this\ optional\ password\ blank -y/g/n>\ n -Path\ to\ unencrypted\ PEM\-encoded\ private\ key\ file,\ leave\ blank\ to\ use\ ssh\-agent. -key_file>\ -Remote\ config -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -[remote] -host\ =\ example.com -user\ =\ sftpuser -port\ =\ -pass\ =\ -key_file\ =\ -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -y)\ Yes\ this\ is\ OK -e)\ Edit\ this\ remote -d)\ Delete\ this\ remote -y/e/d>\ y -\f[] -.fi -.PP -This remote is called \f[C]remote\f[] and can now be used like this -.PP -See all directories in the home directory -.IP -.nf -\f[C] -rclone\ lsd\ remote: -\f[] -.fi -.PP -Make a new directory -.IP -.nf -\f[C] -rclone\ mkdir\ remote:path/to/directory -\f[] -.fi -.PP -List the contents of a directory -.IP -.nf -\f[C] -rclone\ ls\ remote:path/to/directory -\f[] -.fi -.PP -Sync \f[C]/home/local/directory\f[] to the remote directory, deleting -any excess files in the directory. -.IP -.nf -\f[C] -rclone\ sync\ /home/local/directory\ remote:directory -\f[] -.fi -.SS SSH Authentication -.PP -The SFTP remote supports 3 authentication methods -.IP \[bu] 2 -Password -.IP \[bu] 2 -Key file -.IP \[bu] 2 -ssh\-agent -.PP -Key files should be unencrypted PEM\-encoded private key files. -For instance \f[C]/home/$USER/.ssh/id_rsa\f[]. -.PP -If you don\[aq]t specify \f[C]pass\f[] or \f[C]key_file\f[] then it will -attempt to contact an ssh\-agent. -.SS ssh\-agent on macOS -.PP -Note that there seem to be various problems with using an ssh\-agent on -macOS due to recent changes in the OS. -The most effective work\-around seems to be to start an ssh\-agent in -each session, eg -.IP -.nf -\f[C] -eval\ `ssh\-agent\ \-s`\ &&\ ssh\-add\ \-A -\f[] -.fi -.PP -And then at the end of the session -.IP -.nf -\f[C] -eval\ `ssh\-agent\ \-k` -\f[] -.fi -.PP -These commands can be used in scripts of course. 
-.SS Modified time -.PP -Modified times are stored on the server to 1 second precision. -.PP -Modified times are used in syncing and are fully supported. -.SS Limitations -.PP -SFTP does not support any checksums. -.PP -The only ssh agent supported under Windows is Putty\[aq]s pagent. -.PP -SFTP isn\[aq]t supported under plan9 until this -issue (https://github.com/pkg/sftp/issues/156) is fixed. -.PP -Note that since SFTP isn\[aq]t HTTP based the following flags don\[aq]t -work with it: \f[C]\-\-dump\-headers\f[], \f[C]\-\-dump\-bodies\f[], -\f[C]\-\-dump\-auth\f[] -.PP -Note that \f[C]\-\-timeout\f[] isn\[aq]t supported (but -\f[C]\-\-contimeout\f[] is). -.SS FTP -.PP -FTP is the File Transfer Protocol. -FTP support is provided using the -github.com/jlaffaye/ftp (https://godoc.org/github.com/jlaffaye/ftp) -package. -.PP -Here is an example of making an FTP configuration. -First run -.IP -.nf -\f[C] -rclone\ config -\f[] -.fi -.PP -This will guide you through an interactive setup process. -An FTP remote only needs a host together with and a username and a -password. -With anonymous FTP server, you will need to use \f[C]anonymous\f[] as -username and your email address as the password. -.IP -.nf -\f[C] -No\ remotes\ found\ \-\ make\ a\ new\ one -n)\ New\ remote -r)\ Rename\ remote -c)\ Copy\ remote -s)\ Set\ configuration\ password -q)\ Quit\ config -n/r/c/s/q>\ n -name>\ remote -Type\ of\ storage\ to\ configure. -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ FTP\ Connection\ -\ \ \ \\\ "ftp" -\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 8\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 9\ /\ Hubic -\ \ \ \\\ "hubic" -10\ /\ Local\ Disk -\ \ \ \\\ "local" -11\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -13\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -14\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ ftp -FTP\ host\ to\ connect\ to -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Connect\ to\ ftp.example.com -\ \ \ \\\ "ftp.example.com" -host>\ ftp.example.com -FTP\ username,\ leave\ blank\ for\ current\ username,\ ncw -user> -FTP\ port,\ leave\ blank\ to\ use\ default\ (21) -port> -FTP\ password -y)\ Yes\ type\ in\ my\ own\ password -g)\ Generate\ random\ password -y/g>\ y -Enter\ the\ password: -password: -Confirm\ the\ password: -password: -Remote\ config -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -[remote] -host\ =\ ftp.example.com -user\ =\ -port\ = -pass\ =\ ***\ ENCRYPTED\ *** -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- -y)\ Yes\ this\ is\ OK -e)\ Edit\ this\ remote -d)\ Delete\ this\ remote -y/e/d>\ y -\f[] -.fi -.PP -This remote is called \f[C]remote\f[] and can now be used like this -.PP -See all directories in the home directory -.IP -.nf -\f[C] -rclone\ lsd\ remote: -\f[] -.fi -.PP -Make a new directory -.IP -.nf -\f[C] -rclone\ mkdir\ remote:path/to/directory -\f[] -.fi -.PP -List the contents of a directory -.IP -.nf -\f[C] -rclone\ ls\ remote:path/to/directory -\f[] -.fi -.PP -Sync \f[C]/home/local/directory\f[] to the remote directory, deleting -any excess files in the directory. 
-.IP -.nf -\f[C] -rclone\ sync\ /home/local/directory\ remote:directory -\f[] -.fi -.SS Modified time -.PP -FTP does not support modified times. -Any times you see on the server will be time of upload. -.SS Checksums -.PP -FTP does not support any checksums. -.SS Limitations -.PP -Note that since FTP isn\[aq]t HTTP based the following flags don\[aq]t -work with it: \f[C]\-\-dump\-headers\f[], \f[C]\-\-dump\-bodies\f[], -\f[C]\-\-dump\-auth\f[] -.PP -Note that \f[C]\-\-timeout\f[] isn\[aq]t supported (but -\f[C]\-\-contimeout\f[] is). -.PP -FTP could support server side move but doesn\[aq]t yet. -.SS HTTP -.PP -The HTTP remote is a read only remote for reading files of a webserver. -The webserver should provide file listings which rclone will read and -turn into a remote. -This has been tested with common webservers such as Apache/Nginx/Caddy -and will likely work with file listings from most web servers. -(If it doesn\[aq]t then please file an issue, or send a pull request!) -.PP -Paths are specified as \f[C]remote:\f[] or \f[C]remote:path/to/dir\f[]. +The initial setup for Box involves getting a token from Box which you +need to do in your browser. +\f[C]rclone\ config\f[] walks you through it. .PP Here is an example of how to make a remote called \f[C]remote\f[]. First run: @@ -6640,50 +5434,134 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ \ \ \\\ "s3" \ 3\ /\ Backblaze\ B2 \ \ \ \\\ "b2" -\ 4\ /\ Dropbox +\ 4\ /\ Box +\ \ \ \\\ "box" +\ 5\ /\ Dropbox \ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote +\ 6\ /\ Encrypt/Decrypt\ a\ remote \ \ \ \\\ "crypt" -\ 6\ /\ FTP\ Connection +\ 7\ /\ FTP\ Connection \ \ \ \\\ "ftp" -\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) +\ 8\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) \ \ \ \\\ "google\ cloud\ storage" -\ 8\ /\ Google\ Drive +\ 9\ /\ Google\ Drive \ \ \ \\\ "drive" -\ 9\ /\ Hubic +10\ /\ Hubic \ \ \ \\\ "hubic" -10\ /\ Local\ Disk +11\ /\ Local\ Disk \ \ \ \\\ "local" -11\ /\ Microsoft\ OneDrive +12\ /\ Microsoft\ OneDrive \ \ \ \\\ "onedrive" -12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) +13\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) \ \ \ \\\ "swift" -13\ /\ SSH/SFTP\ Connection +14\ /\ SSH/SFTP\ Connection \ \ \ \\\ "sftp" -14\ /\ Yandex\ Disk +15\ /\ Yandex\ Disk \ \ \ \\\ "yandex" -15\ /\ http\ Connection +16\ /\ http\ Connection \ \ \ \\\ "http" -Storage>\ http -URL\ of\ http\ host\ to\ connect\ to -Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Connect\ to\ example.com -\ \ \ \\\ "https://example.com" -url>\ https://beta.rclone.org +Storage>\ box +Box\ App\ Client\ Id\ \-\ leave\ blank\ normally. +client_id>\ +Box\ App\ Client\ Secret\ \-\ leave\ blank\ normally. +client_secret>\ Remote\ config +Use\ auto\ config? +\ *\ Say\ Y\ if\ not\ sure +\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine +y)\ Yes +n)\ No +y/n>\ y +If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth +Log\ in\ and\ authorize\ rclone\ for\ access +Waiting\ for\ code... 
+Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
-url\ =\ https://beta.rclone.org
+client_id\ =\ 
+client_secret\ =\ 
+token\ =\ {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
+\f[]
+.fi
+.PP
+See the remote setup docs (https://rclone.org/remote_setup/) for how to
+set it up on a machine with no Internet browser available.
+.PP
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Box.
+This only runs from the moment it opens your browser to the moment you
+get back the verification code.
+This is on \f[C]http://127.0.0.1:53682/\f[] and it may require you
+to unblock it temporarily if you are running a host firewall.
+.PP
+Once configured you can then use \f[C]rclone\f[] like this,
+.PP
+List directories in top level of your Box
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+List all the files in your Box
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:
+\f[]
+.fi
+.PP
+To copy a local directory to a Box directory called backup
+.IP
+.nf
+\f[C]
+rclone\ copy\ /home/source\ remote:backup
+\f[]
+.fi
+.SS Invalid refresh token
+.PP
+According to the box
+docs (https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens):
+.RS
+.PP
+Each refresh_token is valid for one use in 60 days.
+.RE
+.PP
+This means that if you
+.IP \[bu] 2
+Don\[aq]t use the box remote for 60 days
+.IP \[bu] 2
+Copy the config file with a box refresh token in and use it in two
+places
+.IP \[bu] 2
+Get an error on a token refresh
+.PP
+then rclone will return an error which includes the text
+\f[C]Invalid\ refresh\ token\f[].
+.PP
+To fix this you will need to use oauth2 again to update the refresh
+token.
+You can use the methods in the remote setup
+docs (https://rclone.org/remote_setup/), bearing in mind that if you use
+the "copy the config file" method, you should not use that remote on the
+computer you did the authentication on.
+.PP
+Here is how to do it.
+.IP
+.nf
+\f[C]
+$\ rclone\ config
Current\ remotes:

Name\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Type
====\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ====
-remote\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ http
+remote\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ box

e)\ Edit\ existing\ remote
n)\ New\ remote
@@ -6692,66 +5570,87 @@
r)\ Rename\ remote
c)\ Copy\ remote
s)\ Set\ configuration\ password
q)\ Quit\ config
-e/n/d/r/c/s/q>\ q
+e/n/d/r/c/s/q>\ e
+Choose\ a\ number\ from\ below,\ or\ type\ in\ an\ existing\ value
+\ 1\ >\ remote
+remote>\ remote
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+type\ =\ box
+token\ =\ {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017\-07\-08T23:40:08.059167677+01:00"}
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+Edit\ remote
+Value\ "client_id"\ =\ ""
+Edit?\ (y/n)>
+y)\ Yes
+n)\ No
+y/n>\ n
+Value\ "client_secret"\ =\ ""
+Edit?\ (y/n)>
+y)\ Yes
+n)\ No
+y/n>\ n
+Remote\ config
+Already\ have\ a\ token\ \-\ refresh?
+y)\ Yes
+n)\ No
+y/n>\ y
+Use\ auto\ config?
+\ *\ Say\ Y\ if\ not\ sure
+\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
+y)\ Yes
+n)\ No
+y/n>\ y
+If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
+Log\ in\ and\ authorize\ rclone\ for\ access
+Waiting\ for\ code...
+Got\ code
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+type\ =\ box
+token\ =\ {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017\-07\-23T12:22:29.259137901+01:00"}
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
\f[]
.fi
+.SS Modified time and hashes
.PP
-This remote is called \f[C]remote\f[] and can now be used like this
+Box allows modification times to be set on objects accurate to 1 second.
+These will be used to detect whether objects need syncing or not.
.PP
-See all the top level directories
-.IP
-.nf
-\f[C]
-rclone\ lsd\ remote:
-\f[]
-.fi
+Box supports SHA1 type hashes, so you can use the
+\f[C]\-\-checksum\f[] flag.
+.SS Transfers
.PP
-List the contents of a directory
-.IP
-.nf
-\f[C]
-rclone\ ls\ remote:directory
-\f[]
-.fi
+For files above 50MB rclone will use a chunked transfer.
+Rclone will upload up to \f[C]\-\-transfers\f[] chunks at the same time
+(shared among all the multipart uploads).
+Chunks are buffered in memory and are normally 8MB so increasing
+\f[C]\-\-transfers\f[] will increase memory use.
+.SS Deleting files
.PP
-Sync the remote \f[C]directory\f[] to \f[C]/home/local/directory\f[],
-deleting any excess files.
-.IP
-.nf
-\f[C]
-rclone\ sync\ remote:directory\ /home/local/directory
-\f[]
-.fi
-.SS Read only
+Depending on the enterprise settings for your user, the item will either
+be actually deleted from Box or moved to the trash.
+.SS Specific options
.PP
-This remote is read only \- you can\[aq]t upload files to an HTTP
-server.
-.SS Modified time
+Here are the command line options specific to this cloud storage system.
+.SS \-\-box\-upload\-cutoff=SIZE
.PP
-Most HTTP servers store time accurate to 1 second.
-.SS Checksum
+Cutoff for switching to chunked upload \- must be >= 50MB.
+The default is 50MB.
+.SS Limitations
.PP
-No checksums are stored.
-.SS Usage without a config file
+Note that Box is case insensitive so you can\[aq]t have a file called
+"Hello.doc" and one called "hello.doc".
.PP
-Note that since only two environment variable need to be set, it is easy
-to use without a config file like this.
-.IP
-.nf
-\f[C]
-RCLONE_CONFIG_ZZ_TYPE=http\ RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org\ rclone\ lsd\ zz:
-\f[]
-.fi
+Box file names can\[aq]t have the \f[C]\\\f[] character in.
+rclone maps this to and from an identical looking unicode equivalent
+\f[C]＼\f[].
.PP
-Or if you prefer
-.IP
-.nf
-\f[C]
-export\ RCLONE_CONFIG_ZZ_TYPE=http
-export\ RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org
-rclone\ lsd\ zz:
-\f[]
-.fi
+Box only supports filenames up to 255 characters in length.
.SS Crypt
.PP
The \f[C]crypt\f[] remote encrypts and decrypts another remote.
@@ -7197,6 +6096,2510 @@
If the user doesn\[aq]t supply a salt then rclone uses an internal one.
\f[C]scrypt\f[] makes it impractical to mount a dictionary attack on
rclone encrypted data.
For full protection agains this you should always use a salt.
+.SS Dropbox
+.PP
+Paths are specified as \f[C]remote:path\f[]
+.PP
+Dropbox paths may be as deep as required, eg
+\f[C]remote:directory/subdirectory\f[].
+.PP
+The initial setup for dropbox involves getting a token from Dropbox
+which you need to do in your browser.
+\f[C]rclone\ config\f[] walks you through it.
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[].
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+n)\ New\ remote
+d)\ Delete\ remote
+q)\ Quit\ config
+e/n/d/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 3\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 4\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+\ 7\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+\ 8\ /\ Hubic
+\ \ \ \\\ "hubic"
+\ 9\ /\ Local\ Disk
+\ \ \ \\\ "local"
+10\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+Storage>\ 4
+Dropbox\ App\ Key\ \-\ leave\ blank\ normally.
+app_key>
+Dropbox\ App\ Secret\ \-\ leave\ blank\ normally.
+app_secret>
+Remote\ config
+Please\ visit:
+https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
+Enter\ the\ code:\ XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+app_key\ =
+app_secret\ =
+token\ =\ XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+\f[]
+.fi
+.PP
+You can then use it like this,
+.PP
+List directories in top level of your dropbox
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+List all the files in your dropbox
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:
+\f[]
+.fi
+.PP
+To copy a local directory to a dropbox directory called backup
+.IP
+.nf
+\f[C]
+rclone\ copy\ /home/source\ remote:backup
+\f[]
+.fi
+.SS Modified time and Hashes
+.PP
+Dropbox supports modified times, but the only way to set a modification
+time is to re\-upload the file.
+.PP
+This means that if you uploaded your data with an older version of
+rclone which didn\[aq]t support the v2 API and modified times, rclone
+will decide to upload all your old data to fix the modification times.
+If you don\[aq]t want this to happen use the \f[C]\-\-size\-only\f[] or
+\f[C]\-\-checksum\f[] flag to stop it.
+.PP
+Dropbox supports its own hash
+type (https://www.dropbox.com/developers/reference/content-hash) which
+is checked for all transfers.
+.SS Specific options
+.PP
+Here are the command line options specific to this cloud storage system.
+.SS \-\-dropbox\-chunk\-size=SIZE
+.PP
+Upload chunk size.
+Max 150M.
+The default is 128MB.
+Note that this isn\[aq]t buffered into memory.
+.SS Limitations
+.PP
+Note that Dropbox is case insensitive so you can\[aq]t have a file
+called "Hello.doc" and one called "hello.doc".
+.PP
+There are some file names such as \f[C]thumbs.db\f[] which Dropbox
+can\[aq]t store.
+There is a full list of them in the "Ignored Files" section of this
+document (https://www.dropbox.com/en/help/145).
+Rclone will issue an error message
+\f[C]File\ name\ disallowed\ \-\ not\ uploading\f[] if it attempts to
+upload one of those file names, but the sync won\[aq]t fail.
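+.PP
+If you would rather not see these errors, one possible workaround (a
+suggestion only, not a Dropbox\-specific feature) is to skip such
+files with rclone\[aq]s standard filter flags, eg
+.IP
+.nf
+\f[C]
+rclone\ copy\ \-\-exclude\ thumbs.db\ /home/source\ remote:backup
+\f[]
+.fi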
+.PP
+If you have more than 10,000 files in a directory then
+\f[C]rclone\ purge\ dropbox:dir\f[] will return the error
+\f[C]Failed\ to\ purge:\ There\ are\ too\ many\ files\ involved\ in\ this\ operation\f[].
+As a work\-around do an \f[C]rclone\ delete\ dropbox:dir\f[] followed by
+an \f[C]rclone\ rmdir\ dropbox:dir\f[].
+.SS FTP
+.PP
+FTP is the File Transfer Protocol.
+FTP support is provided using the
+github.com/jlaffaye/ftp (https://godoc.org/github.com/jlaffaye/ftp)
+package.
+.PP
+Here is an example of making an FTP configuration.
+First run
+.IP
+.nf
+\f[C]
+rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process.
+An FTP remote only needs a host together with a username and a
+password.
+With an anonymous FTP server, you will need to use \f[C]anonymous\f[] as
+the username and your email address as the password.
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/r/c/s/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 3\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 4\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ FTP\ Connection\ 
+\ \ \ \\\ "ftp"
+\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+\ 8\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+\ 9\ /\ Hubic
+\ \ \ \\\ "hubic"
+10\ /\ Local\ Disk
+\ \ \ \\\ "local"
+11\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+13\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+14\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+Storage>\ ftp
+FTP\ host\ to\ connect\ to
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Connect\ to\ ftp.example.com
+\ \ \ \\\ "ftp.example.com"
+host>\ ftp.example.com
+FTP\ username,\ leave\ blank\ for\ current\ username,\ ncw
+user>
+FTP\ port,\ leave\ blank\ to\ use\ default\ (21)
+port>
+FTP\ password
+y)\ Yes\ type\ in\ my\ own\ password
+g)\ Generate\ random\ password
+y/g>\ y
+Enter\ the\ password:
+password:
+Confirm\ the\ password:
+password:
+Remote\ config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+host\ =\ ftp.example.com
+user\ =\ 
+port\ =
+pass\ =\ ***\ ENCRYPTED\ ***
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+\f[]
+.fi
+.PP
+This remote is called \f[C]remote\f[] and can now be used like this
+.PP
+See all directories in the home directory
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+Make a new directory
+.IP
+.nf
+\f[C]
+rclone\ mkdir\ remote:path/to/directory
+\f[]
+.fi
+.PP
+List the contents of a directory
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:path/to/directory
+\f[]
+.fi
+.PP
+Sync \f[C]/home/local/directory\f[] to the remote directory, deleting
+any excess files in the directory.
+.IP
+.nf
+\f[C]
+rclone\ sync\ /home/local/directory\ remote:directory
+\f[]
+.fi
+.SS Modified time
+.PP
+FTP does not support modified times.
+Any times you see on the server will be the time of upload.
+.SS Checksums
+.PP
+FTP does not support any checksums.
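+.PP
+Since neither modified times nor checksums are available on this
+remote, one option (using the generic \f[C]\-\-size\-only\f[] flag,
+not anything FTP\-specific) is to make syncs compare sizes only, eg
+.IP
+.nf
+\f[C]
+rclone\ sync\ \-\-size\-only\ /home/local/directory\ remote:directory
+\f[]
+.fi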
+.SS Limitations +.PP +Note that since FTP isn\[aq]t HTTP based the following flags don\[aq]t +work with it: \f[C]\-\-dump\-headers\f[], \f[C]\-\-dump\-bodies\f[], +\f[C]\-\-dump\-auth\f[] +.PP +Note that \f[C]\-\-timeout\f[] isn\[aq]t supported (but +\f[C]\-\-contimeout\f[] is). +.PP +Note that \f[C]\-\-bind\f[] isn\[aq]t supported. +.PP +FTP could support server side move but doesn\[aq]t yet. +.SS Google Cloud Storage +.PP +Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for +the \f[C]lsd\f[] command.) You may put subdirectories in too, eg +\f[C]remote:bucket/path/to/dir\f[]. +.PP +The initial setup for google cloud storage involves getting a token from +Google Cloud Storage which you need to do in your browser. +\f[C]rclone\ config\f[] walks you through it. +.PP +Here is an example of how to make a remote called \f[C]remote\f[]. +First run: +.IP +.nf +\f[C] +\ rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +n)\ New\ remote +d)\ Delete\ remote +q)\ Quit\ config +e/n/d/q>\ n +name>\ remote +Type\ of\ storage\ to\ configure. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Amazon\ Drive +\ \ \ \\\ "amazon\ cloud\ drive" +\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) +\ \ \ \\\ "s3" +\ 3\ /\ Backblaze\ B2 +\ \ \ \\\ "b2" +\ 4\ /\ Dropbox +\ \ \ \\\ "dropbox" +\ 5\ /\ Encrypt/Decrypt\ a\ remote +\ \ \ \\\ "crypt" +\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) +\ \ \ \\\ "google\ cloud\ storage" +\ 7\ /\ Google\ Drive +\ \ \ \\\ "drive" +\ 8\ /\ Hubic +\ \ \ \\\ "hubic" +\ 9\ /\ Local\ Disk +\ \ \ \\\ "local" +10\ /\ Microsoft\ OneDrive +\ \ \ \\\ "onedrive" +11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) +\ \ \ \\\ "swift" +12\ /\ SSH/SFTP\ Connection +\ \ \ \\\ "sftp" +13\ /\ Yandex\ Disk +\ \ \ \\\ "yandex" +Storage>\ 6 +Google\ Application\ Client\ Id\ \-\ leave\ blank\ normally. +client_id> +Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally. +client_secret> +Project\ number\ optional\ \-\ needed\ only\ for\ list/create/delete\ buckets\ \-\ see\ your\ developer\ console. +project_number>\ 12345678 +Service\ Account\ Credentials\ JSON\ file\ path\ \-\ needed\ only\ if\ you\ want\ use\ SA\ instead\ of\ interactive\ login. +service_account_file> +Access\ Control\ List\ for\ new\ objects. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access. +\ \ \ \\\ "authenticatedRead" +\ 2\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ owners\ get\ OWNER\ access. +\ \ \ \\\ "bucketOwnerFullControl" +\ 3\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ owners\ get\ READER\ access. +\ \ \ \\\ "bucketOwnerRead" +\ 4\ /\ Object\ owner\ gets\ OWNER\ access\ [default\ if\ left\ blank]. +\ \ \ \\\ "private" +\ 5\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ members\ get\ access\ according\ to\ their\ roles. +\ \ \ \\\ "projectPrivate" +\ 6\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Users\ get\ READER\ access. +\ \ \ \\\ "publicRead" +object_acl>\ 4 +Access\ Control\ List\ for\ new\ buckets. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access. +\ \ \ \\\ "authenticatedRead" +\ 2\ /\ Project\ team\ owners\ get\ OWNER\ access\ [default\ if\ left\ blank]. 
+\ \ \ \\\ "private" +\ 3\ /\ Project\ team\ members\ get\ access\ according\ to\ their\ roles. +\ \ \ \\\ "projectPrivate" +\ 4\ /\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ READER\ access. +\ \ \ \\\ "publicRead" +\ 5\ /\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ WRITER\ access. +\ \ \ \\\ "publicReadWrite" +bucket_acl>\ 2 +Location\ for\ the\ newly\ created\ buckets. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Empty\ for\ default\ location\ (US). +\ \ \ \\\ "" +\ 2\ /\ Multi\-regional\ location\ for\ Asia. +\ \ \ \\\ "asia" +\ 3\ /\ Multi\-regional\ location\ for\ Europe. +\ \ \ \\\ "eu" +\ 4\ /\ Multi\-regional\ location\ for\ United\ States. +\ \ \ \\\ "us" +\ 5\ /\ Taiwan. +\ \ \ \\\ "asia\-east1" +\ 6\ /\ Tokyo. +\ \ \ \\\ "asia\-northeast1" +\ 7\ /\ Singapore. +\ \ \ \\\ "asia\-southeast1" +\ 8\ /\ Sydney. +\ \ \ \\\ "australia\-southeast1" +\ 9\ /\ Belgium. +\ \ \ \\\ "europe\-west1" +10\ /\ London. +\ \ \ \\\ "europe\-west2" +11\ /\ Iowa. +\ \ \ \\\ "us\-central1" +12\ /\ South\ Carolina. +\ \ \ \\\ "us\-east1" +13\ /\ Northern\ Virginia. +\ \ \ \\\ "us\-east4" +14\ /\ Oregon. +\ \ \ \\\ "us\-west1" +location>\ 12 +The\ storage\ class\ to\ use\ when\ storing\ objects\ in\ Google\ Cloud\ Storage. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Default +\ \ \ \\\ "" +\ 2\ /\ Multi\-regional\ storage\ class +\ \ \ \\\ "MULTI_REGIONAL" +\ 3\ /\ Regional\ storage\ class +\ \ \ \\\ "REGIONAL" +\ 4\ /\ Nearline\ storage\ class +\ \ \ \\\ "NEARLINE" +\ 5\ /\ Coldline\ storage\ class +\ \ \ \\\ "COLDLINE" +\ 6\ /\ Durable\ reduced\ availability\ storage\ class +\ \ \ \\\ "DURABLE_REDUCED_AVAILABILITY" +storage_class>\ 5 +Remote\ config +Use\ auto\ config? +\ *\ Say\ Y\ if\ not\ sure +\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine\ or\ Y\ didn\[aq]t\ work +y)\ Yes +n)\ No +y/n>\ y +If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth +Log\ in\ and\ authorize\ rclone\ for\ access +Waiting\ for\ code... +Got\ code +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +type\ =\ google\ cloud\ storage +client_id\ = +client_secret\ = +token\ =\ {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014\-07\-17T20:49:14.929208288+01:00","Extra":null} +project_number\ =\ 12345678 +object_acl\ =\ private +bucket_acl\ =\ private +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +Note that rclone runs a webserver on your local machine to collect the +token as returned from Google if you use auto config mode. +This only runs from the moment it opens your browser to the moment you +get back the verification code. +This is on \f[C]http://127.0.0.1:53682/\f[] and this it may require you +to unblock it temporarily if you are running a host firewall, or use +manual mode. +.PP +This remote is called \f[C]remote\f[] and can now be used like this +.PP +See all the buckets in your project +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +Make a new bucket +.IP +.nf +\f[C] +rclone\ mkdir\ remote:bucket +\f[] +.fi +.PP +List the contents of a bucket +.IP +.nf +\f[C] +rclone\ ls\ remote:bucket +\f[] +.fi +.PP +Sync \f[C]/home/local/directory\f[] to the remote bucket, deleting any +excess files in the bucket. 
+.IP
+.nf
+\f[C]
+rclone\ sync\ /home/local/directory\ remote:bucket
+\f[]
+.fi
+.SS Service Account support
+.PP
+You can set up rclone with Google Cloud Storage in an unattended mode,
+i.e.
+not tied to a specific end\-user Google account.
+This is useful when you want to synchronise files onto machines that
+don\[aq]t have actively logged\-in users, for example build machines.
+.PP
+To get credentials for Google Cloud Platform IAM Service
+Accounts (https://cloud.google.com/iam/docs/service-accounts), please
+head to the Service
+Account (https://console.cloud.google.com/permissions/serviceaccounts)
+section of the Google Developer Console.
+Service Accounts behave just like normal \f[C]User\f[] permissions in
+Google Cloud Storage
+ACLs (https://cloud.google.com/storage/docs/access-control), so you can
+limit their access (e.g.
+make them read only).
+After creating an account, a JSON file containing the Service
+Account\[aq]s credentials will be downloaded onto your machines.
+These credentials are what rclone will use for authentication.
+.PP
+To use a Service Account instead of OAuth2 token flow, enter the path to
+your Service Account credentials at the \f[C]service_account_file\f[]
+prompt and rclone won\[aq]t use the browser based authentication flow.
+.SS \-\-fast\-list
+.PP
+This remote supports \f[C]\-\-fast\-list\f[] which allows you to use
+fewer transactions in exchange for more memory.
+See the rclone docs (/docs/#fast-list) for more details.
+.SS Modified time
+.PP
+Google Cloud Storage stores md5sums natively and rclone stores
+modification times as metadata on the object, under the "mtime" key in
+RFC3339 format accurate to 1ns.
+.SS Google Drive
+.PP
+Paths are specified as \f[C]drive:path\f[]
+.PP
+Drive paths may be as deep as required, eg
+\f[C]drive:directory/subdirectory\f[].
+.PP
+The initial setup for drive involves getting a token from Google drive
+which you need to do in your browser.
+\f[C]rclone\ config\f[] walks you through it.
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[].
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/r/c/s/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 3\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 4\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ FTP\ Connection
+\ \ \ \\\ "ftp"
+\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+\ 8\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+\ 9\ /\ Hubic
+\ \ \ \\\ "hubic"
+10\ /\ Local\ Disk
+\ \ \ \\\ "local"
+11\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+13\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+14\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+Storage>\ 8
+Google\ Application\ Client\ Id\ \-\ leave\ blank\ normally.
+client_id>
+Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally.
+client_secret>
+Remote\ config
+Use\ auto\ config?
+\ *\ Say\ Y\ if\ not\ sure +\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine\ or\ Y\ didn\[aq]t\ work +y)\ Yes +n)\ No +y/n>\ y +If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth +Log\ in\ and\ authorize\ rclone\ for\ access +Waiting\ for\ code... +Got\ code +Configure\ this\ as\ a\ team\ drive? +y)\ Yes +n)\ No +y/n>\ n +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +client_id\ = +client_secret\ = +token\ =\ {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014\-03\-16T13:57:58.955387075Z","Extra":null} +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +Note that rclone runs a webserver on your local machine to collect the +token as returned from Google if you use auto config mode. +This only runs from the moment it opens your browser to the moment you +get back the verification code. +This is on \f[C]http://127.0.0.1:53682/\f[] and this it may require you +to unblock it temporarily if you are running a host firewall, or use +manual mode. +.PP +You can then use it like this, +.PP +List directories in top level of your drive +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +List all the files in your drive +.IP +.nf +\f[C] +rclone\ ls\ remote: +\f[] +.fi +.PP +To copy a local directory to a drive directory called backup +.IP +.nf +\f[C] +rclone\ copy\ /home/source\ remote:backup +\f[] +.fi +.SS Team drives +.PP +If you want to configure the remote to point to a Google Team Drive then +answer \f[C]y\f[] to the question +\f[C]Configure\ this\ as\ a\ team\ drive?\f[]. +.PP +This will fetch the list of Team Drives from google and allow you to +configure which one you want to use. +You can also type in a team drive ID if you prefer. +.PP +For example: +.IP +.nf +\f[C] +Configure\ this\ as\ a\ team\ drive? +y)\ Yes +n)\ No +y/n>\ y +Fetching\ team\ drive\ list... +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Rclone\ Test +\ \ \ \\\ "xxxxxxxxxxxxxxxxxxxx" +\ 2\ /\ Rclone\ Test\ 2 +\ \ \ \\\ "yyyyyyyyyyyyyyyyyyyy" +\ 3\ /\ Rclone\ Test\ 3 +\ \ \ \\\ "zzzzzzzzzzzzzzzzzzzz" +Enter\ a\ Team\ Drive\ ID>\ 1 +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +client_id\ = +client_secret\ = +token\ =\ {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014\-03\-16T13:57:58.955387075Z","Extra":null} +team_drive\ =\ xxxxxxxxxxxxxxxxxxxx +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.SS Modified time +.PP +Google drive stores modification times accurate to 1 ms. +.SS Revisions +.PP +Google drive stores revisions of files. +When you upload a change to an existing file to google drive using +rclone it will create a new revision of that file. +.PP +Revisions follow the standard google policy which at time of writing was +.IP \[bu] 2 +They are deleted after 30 days or 100 revisions (whatever comes first). +.IP \[bu] 2 +They do not count towards a user storage quota. +.SS Deleting files +.PP +By default rclone will send all files to the trash when deleting files. 
+If deleting them permanently is required then use the
+\f[C]\-\-drive\-use\-trash=false\f[] flag, or set the equivalent
+environment variable.
+.SS Emptying trash
+.PP
+If you wish to empty your trash you can use the
+\f[C]rclone\ cleanup\ remote:\f[] command which will permanently delete
+all your trashed files.
+This command does not take any path arguments.
+.SS Specific options
+.PP
+Here are the command line options specific to this cloud storage system.
+.SS \-\-drive\-auth\-owner\-only
+.PP
+Only consider files owned by the authenticated user.
+.SS \-\-drive\-chunk\-size=SIZE
+.PP
+Upload chunk size.
+Must be a power of 2 >= 256k.
+Default value is 8 MB.
+.PP
+Making this larger will improve performance, but note that each chunk is
+buffered in memory, one per transfer.
+.PP
+Reducing this will reduce memory usage but decrease performance.
+.SS \-\-drive\-formats
+.PP
+Google documents can only be exported from Google drive.
+When rclone downloads a Google doc it chooses a format to download
+depending upon this setting.
+.PP
+By default the formats are \f[C]docx,xlsx,pptx,svg\f[] which are a
+sensible default for an editable document.
+.PP
+When choosing a format, rclone runs down the list provided in order and
+chooses the first file format the doc can be exported as from the list.
+If the file can\[aq]t be exported to a format on the formats list, then
+rclone will choose a format from the default list.
+.PP
+If you prefer an archive copy then you might use
+\f[C]\-\-drive\-formats\ pdf\f[], or if you prefer
+openoffice/libreoffice formats you might use
+\f[C]\-\-drive\-formats\ ods,odt,odp\f[].
+.PP
+Note that rclone adds the extension to the google doc, so if it is
+called \f[C]My\ Spreadsheet\f[] on google docs, it will be exported as
+\f[C]My\ Spreadsheet.xlsx\f[] or \f[C]My\ Spreadsheet.pdf\f[] etc.
+.PP
+Here are the possible extensions with their corresponding mime types.
+.PP
+.TS
+tab(@);
+lw(9.7n) lw(11.7n) lw(12.6n).
+T{
+Extension
+T}@T{
+Mime Type
+T}@T{
+Description
+T}
+_
+T{
+csv
+T}@T{
+text/csv
+T}@T{
+Standard CSV format for Spreadsheets
+T}
+T{
+doc
+T}@T{
+application/msword
+T}@T{
+Microsoft Office Document
+T}
+T{
+docx
+T}@T{
+application/vnd.openxmlformats\-officedocument.wordprocessingml.document
+T}@T{
+Microsoft Office Document
+T}
+T{
+epub
+T}@T{
+application/epub+zip
+T}@T{
+E\-book format
+T}
+T{
+html
+T}@T{
+text/html
+T}@T{
+An HTML Document
+T}
+T{
+jpg
+T}@T{
+image/jpeg
+T}@T{
+A JPEG Image File
+T}
+T{
+odp
+T}@T{
+application/vnd.oasis.opendocument.presentation
+T}@T{
+Openoffice Presentation
+T}
+T{
+ods
+T}@T{
+application/vnd.oasis.opendocument.spreadsheet
+T}@T{
+Openoffice Spreadsheet
+T}
+T{
+ods
+T}@T{
+application/x\-vnd.oasis.opendocument.spreadsheet
+T}@T{
+Openoffice Spreadsheet
+T}
+T{
+odt
+T}@T{
+application/vnd.oasis.opendocument.text
+T}@T{
+Openoffice Document
+T}
+T{
+pdf
+T}@T{
+application/pdf
+T}@T{
+Adobe PDF Format
+T}
+T{
+png
+T}@T{
+image/png
+T}@T{
+PNG Image Format
+T}
+T{
+pptx
+T}@T{
+application/vnd.openxmlformats\-officedocument.presentationml.presentation
+T}@T{
+Microsoft Office Powerpoint
+T}
+T{
+rtf
+T}@T{
+application/rtf
+T}@T{
+Rich Text Format
+T}
+T{
+svg
+T}@T{
+image/svg+xml
+T}@T{
+Scalable Vector Graphics Format
+T}
+T{
+tsv
+T}@T{
+text/tab\-separated\-values
+T}@T{
+Standard TSV format for spreadsheets
+T}
+T{
+txt
+T}@T{
+text/plain
+T}@T{
+Plain Text
+T}
+T{
+xls
+T}@T{
+application/vnd.ms\-excel
+T}@T{
+Microsoft Office Spreadsheet
+T}
+T{
+xlsx
+T}@T{
+application/vnd.openxmlformats\-officedocument.spreadsheetml.sheet
+T}@T{
+Microsoft Office Spreadsheet
+T}
+T{
+zip
+T}@T{
+application/zip
+T}@T{
+A ZIP file of HTML, Images and CSS
+T}
+.TE
+.SS \-\-drive\-list\-chunk int
+.PP
+Size of listing chunk 100\-1000.
+0 to disable.
+(default 1000)
+.SS \-\-drive\-shared\-with\-me
+.PP
+Only show files that are shared with me
+.SS \-\-drive\-skip\-gdocs
+.PP
+Skip google documents in all listings.
+If given, gdocs practically become invisible to rclone.
+.SS \-\-drive\-trashed\-only
+.PP
+Only show files that are in the trash.
+This will show trashed files in their original directory structure.
+.SS \-\-drive\-upload\-cutoff=SIZE
+.PP
+File size cutoff for switching to chunked upload.
+Default is 8 MB.
+.SS \-\-drive\-use\-trash
+.PP
+Controls whether files are sent to the trash or deleted permanently.
+Defaults to true, namely sending files to the trash.
+Use \f[C]\-\-drive\-use\-trash=false\f[] to delete files permanently
+instead.
+.SS Limitations
+.PP
+Drive has quite a lot of rate limiting.
+This causes rclone to be limited to transferring about 2 files per
+second only.
+Individual files may be transferred much faster at 100s of MBytes/s but
+lots of small files can take a long time.
+.PP
+Server side copies are also subject to a separate rate limit.
+If you see User rate limit exceeded errors, wait at least 24 hours and
+retry.
+You can disable server side copies with \f[C]\-\-disable\ copy\f[] to
+download and upload the files if you prefer.
+.SS Duplicated files
+.PP
+Sometimes, for no reason I\[aq]ve been able to track down, drive will
+duplicate a file that rclone uploads.
+Drive, unlike all the other remotes, can have duplicated files.
+.PP
+Duplicated files cause problems with the syncing and you will see
+messages in the log about duplicates.
+.PP
+Use \f[C]rclone\ dedupe\f[] to fix duplicated files.
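+.PP
+For example (the path here is a placeholder), you can either resolve
+duplicates interactively with the default mode, or keep the newest copy
+of each duplicate without prompting:
+.IP
+.nf
+\f[C]
+rclone\ dedupe\ remote:path
+rclone\ dedupe\ \-\-dedupe\-mode\ newest\ remote:path
+\f[]
+.fi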
+.PP
+Note that this isn\[aq]t just a problem with rclone, even Google Photos
+on Android duplicates files on drive sometimes.
+.SS Rclone appears to be re\-copying files it shouldn\[aq]t
+.PP
+There are two possible reasons for rclone to recopy files to Google
+Drive which haven\[aq]t changed.
+.PP
+The first is the duplicated file issue above \- run
+\f[C]rclone\ dedupe\f[] and check your logs for duplicate object or
+directory messages.
+.PP
+The second is that sometimes Google reports different sizes for the
+Google Docs exports which will cause rclone to re\-download Google Docs
+for no apparent reason.
+\f[C]\-\-ignore\-size\f[] is a not very satisfactory work\-around for
+this if it is causing you a lot of problems.
+.SS Google docs downloads sometimes fail with "Failed to copy: read X
+bytes expecting Y"
+.PP
+This is the same problem as above.
+Google reports the google doc is one size, but rclone downloads a
+different size.
+Work\-around with the \f[C]\-\-ignore\-size\f[] flag or wait for rclone
+to retry the download, which it will.
+.SS Making your own client_id
+.PP
+When you use rclone with Google drive in its default configuration you
+are using rclone\[aq]s client_id.
+This is shared between all the rclone users.
+There is a global rate limit on the number of queries per second that
+each client_id can do set by Google.
+rclone already has a high quota and I will continue to make sure it is
+high enough by contacting Google.
+.PP
+However you might find you get better performance making your own
+client_id if you are a heavy user.
+Or you may not, depending on exactly how Google have been raising
+rclone\[aq]s rate limit.
+.PP
+Here is how to create your own Google Drive client ID for rclone:
+.IP "1." 3
+Log into the Google API Console (https://console.developers.google.com/)
+with your Google account.
+It doesn\[aq]t matter what Google account you use.
+(It need not be the same account as the Google Drive you want to access)
+.IP "2." 3
+Select a project or create a new project.
+.IP "3." 3
+Under Overview, Google APIs, Google Apps APIs, click "Drive API", then
+"Enable".
+.IP "4." 3
+Click "Credentials" in the left\-side panel (not "Go to credentials",
+which opens the wizard), then "Create credentials", then "OAuth client
+ID".
+It will prompt you to set the OAuth consent screen product name, if you
+haven\[aq]t set one already.
+.IP "5." 3
+Choose an application type of "other", and click "Create".
+(The default name is fine.)
+.IP "6." 3
+It will show you a client ID and client secret.
+Use these values in rclone config to add a new remote or edit an
+existing remote.
+.PP
+(Thanks to \@balazer on github for these instructions.)
+.SS HTTP
+.PP
+The HTTP remote is a read only remote for reading files off a webserver.
+The webserver should provide file listings which rclone will read and
+turn into a remote.
+This has been tested with common webservers such as Apache/Nginx/Caddy
+and will likely work with file listings from most web servers.
+(If it doesn\[aq]t then please file an issue, or send a pull request!)
+.PP
+Paths are specified as \f[C]remote:\f[] or \f[C]remote:path/to/dir\f[].
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[].
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/s/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 3\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 4\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ FTP\ Connection
+\ \ \ \\\ "ftp"
+\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+\ 8\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+\ 9\ /\ Hubic
+\ \ \ \\\ "hubic"
+10\ /\ Local\ Disk
+\ \ \ \\\ "local"
+11\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+13\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+14\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+15\ /\ http\ Connection
+\ \ \ \\\ "http"
+Storage>\ http
+URL\ of\ http\ host\ to\ connect\ to
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Connect\ to\ example.com
+\ \ \ \\\ "https://example.com"
+url>\ https://beta.rclone.org
+Remote\ config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+url\ =\ https://beta.rclone.org
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+Current\ remotes:
+
+Name\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Type
+====\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ====
+remote\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ http
+
+e)\ Edit\ existing\ remote
+n)\ New\ remote
+d)\ Delete\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+e/n/d/r/c/s/q>\ q
+\f[]
+.fi
+.PP
+This remote is called \f[C]remote\f[] and can now be used like this
+.PP
+See all the top level directories
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+List the contents of a directory
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:directory
+\f[]
+.fi
+.PP
+Sync the remote \f[C]directory\f[] to \f[C]/home/local/directory\f[],
+deleting any excess files.
+.IP
+.nf
+\f[C]
+rclone\ sync\ remote:directory\ /home/local/directory
+\f[]
+.fi
+.SS Read only
+.PP
+This remote is read only \- you can\[aq]t upload files to an HTTP
+server.
+.SS Modified time
+.PP
+Most HTTP servers store time accurate to 1 second.
+.SS Checksum
+.PP
+No checksums are stored.
+.SS Usage without a config file
+.PP
+Note that since only two environment variables need to be set, it is
+easy to use rclone without a config file, like this.
+.IP
+.nf
+\f[C]
+RCLONE_CONFIG_ZZ_TYPE=http\ RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org\ rclone\ lsd\ zz:
+\f[]
+.fi
+.PP
+Or if you prefer
+.IP
+.nf
+\f[C]
+export\ RCLONE_CONFIG_ZZ_TYPE=http
+export\ RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org
+rclone\ lsd\ zz:
+\f[]
+.fi
+.SS Hubic
+.PP
+Paths are specified as \f[C]remote:container\f[] (or \f[C]remote:\f[]
+for the \f[C]lsd\f[] command.) You may put subdirectories in too, eg
+\f[C]remote:container/path/to/dir\f[].
+.PP
+The initial setup for Hubic involves getting a token from Hubic which
+you need to do in your browser.
+\f[C]rclone\ config\f[] walks you through it.
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[].
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+n)\ New\ remote
+s)\ Set\ configuration\ password
+n/s>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 3\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 4\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+\ 7\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+\ 8\ /\ Hubic
+\ \ \ \\\ "hubic"
+\ 9\ /\ Local\ Disk
+\ \ \ \\\ "local"
+10\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+Storage>\ 8
+Hubic\ Client\ Id\ \-\ leave\ blank\ normally.
+client_id>
+Hubic\ Client\ Secret\ \-\ leave\ blank\ normally.
+client_secret>
+Remote\ config
+Use\ auto\ config?
+\ *\ Say\ Y\ if\ not\ sure
+\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
+y)\ Yes
+n)\ No
+y/n>\ y
+If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
+Log\ in\ and\ authorize\ rclone\ for\ access
+Waiting\ for\ code...
+Got\ code
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+client_id\ =
+client_secret\ =
+token\ =\ {"access_token":"XXXXXX"}
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+\f[]
+.fi
+.PP
+See the remote setup docs (https://rclone.org/remote_setup/) for how to
+set it up on a machine with no Internet browser available.
+.PP
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Hubic.
+This only runs from the moment it opens your browser to the moment you
+get back the verification code.
+This is on \f[C]http://127.0.0.1:53682/\f[] and it may require you to
+unblock it temporarily if you are running a host firewall.
+.PP
+Once configured you can then use \f[C]rclone\f[] like this,
+.PP
+List containers in the top level of your Hubic
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+List all the files in your Hubic
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:
+\f[]
+.fi
+.PP
+To copy a local directory to a Hubic directory called backup
+.IP
+.nf
+\f[C]
+rclone\ copy\ /home/source\ remote:backup
+\f[]
+.fi
+.PP
+If you want the directory to be visible in the official \f[I]Hubic
+browser\f[], you need to copy your files to the \f[C]default\f[]
+directory
+.IP
+.nf
+\f[C]
+rclone\ copy\ /home/source\ remote:default/backup
+\f[]
+.fi
+.SS \-\-fast\-list
+.PP
+This remote supports \f[C]\-\-fast\-list\f[] which allows you to use
+fewer transactions in exchange for more memory.
+See the rclone docs (/docs/#fast-list) for more details.
+.SS Modified time
+.PP
+The modified time is stored as metadata on the object as
+\f[C]X\-Object\-Meta\-Mtime\f[] as floating point since the epoch
+accurate to 1 ns.
+.PP
+This is a de facto standard (used in the official python\-swiftclient
+amongst others) for storing the modification time for an object.
+.PP
+Note that Hubic wraps the Swift backend, so most of the properties are
+the same.
+.SS Limitations
+.PP
+This uses the normal OpenStack Swift mechanism to refresh the Swift API
+credentials and ignores the expires field returned by the Hubic API.
+.PP +The Swift API doesn\[aq]t return a correct MD5SUM for segmented files +(Dynamic or Static Large Objects) so rclone won\[aq]t check or use the +MD5SUM for these. +.SS Microsoft Azure Blob Storage +.PP +Paths are specified as \f[C]remote:container\f[] (or \f[C]remote:\f[] +for the \f[C]lsd\f[] command.) You may put subdirectories in too, eg +\f[C]remote:container/path/to/dir\f[]. +.PP +Here is an example of making a Microsoft Azure Blob Storage +configuration. +For a remote called \f[C]remote\f[]. +First run: +.IP +.nf +\f[C] +\ rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +No\ remotes\ found\ \-\ make\ a\ new\ one +n)\ New\ remote +s)\ Set\ configuration\ password +q)\ Quit\ config +n/s/q>\ n +name>\ remote +Type\ of\ storage\ to\ configure. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Amazon\ Drive +\ \ \ \\\ "amazon\ cloud\ drive" +\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) +\ \ \ \\\ "s3" +\ 3\ /\ Backblaze\ B2 +\ \ \ \\\ "b2" +\ 4\ /\ Box +\ \ \ \\\ "box" +\ 5\ /\ Dropbox +\ \ \ \\\ "dropbox" +\ 6\ /\ Encrypt/Decrypt\ a\ remote +\ \ \ \\\ "crypt" +\ 7\ /\ FTP\ Connection +\ \ \ \\\ "ftp" +\ 8\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) +\ \ \ \\\ "google\ cloud\ storage" +\ 9\ /\ Google\ Drive +\ \ \ \\\ "drive" +10\ /\ Hubic +\ \ \ \\\ "hubic" +11\ /\ Local\ Disk +\ \ \ \\\ "local" +12\ /\ Microsoft\ Azure\ Blob\ Storage +\ \ \ \\\ "azureblob" +13\ /\ Microsoft\ OneDrive +\ \ \ \\\ "onedrive" +14\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) +\ \ \ \\\ "swift" +15\ /\ SSH/SFTP\ Connection +\ \ \ \\\ "sftp" +16\ /\ Yandex\ Disk +\ \ \ \\\ "yandex" +17\ /\ http\ Connection +\ \ \ \\\ "http" +Storage>\ azureblob +Storage\ Account\ Name +account>\ account_name +Storage\ Account\ Key +key>\ base64encodedkey== +Endpoint\ for\ the\ service\ \-\ leave\ blank\ normally. +endpoint>\ +Remote\ config +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +account\ =\ account_name +key\ =\ base64encodedkey== +endpoint\ =\ +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +See all containers +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +Make a new container +.IP +.nf +\f[C] +rclone\ mkdir\ remote:container +\f[] +.fi +.PP +List the contents of a container +.IP +.nf +\f[C] +rclone\ ls\ remote:container +\f[] +.fi +.PP +Sync \f[C]/home/local/directory\f[] to the remote container, deleting +any excess files in the container. +.IP +.nf +\f[C] +rclone\ sync\ /home/local/directory\ remote:container +\f[] +.fi +.SS \-\-fast\-list +.PP +This remote supports \f[C]\-\-fast\-list\f[] which allows you to use +fewer transactions in exchange for more memory. +See the rclone docs (/docs/#fast-list) for more details. +.SS Modified time +.PP +The modified time is stored as metadata on the object with the +\f[C]mtime\f[] key. +It is stored using RFC3339 Format time with nanosecond precision. +The metadata is supplied during directory listings so there is no +overhead to using it. +.SS Hashes +.PP +MD5 hashes are stored with blobs. +However blobs that were uploaded in chunks only have an MD5 if the +source remote was capable of MD5 hashes, eg the local disk. +.SS Multipart uploads +.PP +Rclone supports multipart uploads with Azure Blob storage. +Files bigger than 256MB will be uploaded using chunked upload by +default. 
+.PP
+The files will be uploaded in parallel in 4MB chunks (by default).
+Note that these chunks are buffered in memory and there may be up to
+\f[C]\-\-transfers\f[] of them being uploaded at once.
+.PP
+Files can\[aq]t be split into more than 50,000 chunks, so by default
+the largest file that can be uploaded with 4MB chunk size is 195GB.
+Above this rclone will double the chunk size until it creates fewer
+than 50,000 chunks.
+By default this will mean a maximum file size of 3.2TB can be uploaded.
+This can be raised to 5TB using
+\f[C]\-\-azureblob\-chunk\-size\ 100M\f[].
+.PP
+Note that rclone doesn\[aq]t commit the block list until the end of the
+upload which means that there is a limit of 9.5TB of multipart uploads
+in progress as Azure won\[aq]t allow more than that amount of
+uncommitted blocks.
+.SS Specific options
+.PP
+Here are the command line options specific to this cloud storage system.
+.SS \-\-azureblob\-upload\-cutoff=SIZE
+.PP
+Cutoff for switching to chunked upload \- must be <= 256MB.
+The default is 256MB.
+.SS \-\-azureblob\-chunk\-size=SIZE
+.PP
+Upload chunk size.
+Default 4MB.
+Note that this is stored in memory and there may be up to
+\f[C]\-\-transfers\f[] chunks stored at once in memory.
+This can be at most 100MB.
+.SS Limitations
+.PP
+MD5 sums are only uploaded with chunked files if the source has an MD5
+sum.
+This will always be the case for a local to azure copy.
+.SS Microsoft OneDrive
+.PP
+Paths are specified as \f[C]remote:path\f[]
+.PP
+Paths may be as deep as required, eg
+\f[C]remote:directory/subdirectory\f[].
+.PP
+The initial setup for OneDrive involves getting a token from Microsoft
+which you need to do in your browser.
+\f[C]rclone\ config\f[] walks you through it.
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[].
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+s)\ Set\ configuration\ password
+n/s>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 3\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 4\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+\ 7\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+\ 8\ /\ Hubic
+\ \ \ \\\ "hubic"
+\ 9\ /\ Local\ Disk
+\ \ \ \\\ "local"
+10\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+Storage>\ 10
+Microsoft\ App\ Client\ Id\ \-\ leave\ blank\ normally.
+client_id>
+Microsoft\ App\ Client\ Secret\ \-\ leave\ blank\ normally.
+client_secret>
+Remote\ config
+Choose\ OneDrive\ account\ type?
+\ *\ Say\ b\ for\ a\ OneDrive\ business\ account
+\ *\ Say\ p\ for\ a\ personal\ OneDrive\ account
+b)\ Business
+p)\ Personal
+b/p>\ p
+Use\ auto\ config?
+\ *\ Say\ Y\ if\ not\ sure
+\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
+y)\ Yes
+n)\ No
+y/n>\ y
+If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
+Log\ in\ and\ authorize\ rclone\ for\ access
+Waiting\ for\ code...
+Got\ code
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+client_id\ =
+client_secret\ =
+token\ =\ {"access_token":"XXXXXX"}
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+\f[]
+.fi
+.PP
+See the remote setup docs (https://rclone.org/remote_setup/) for how to
+set it up on a machine with no Internet browser available.
+.PP
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Microsoft.
+This only runs from the moment it opens your browser to the moment you
+get back the verification code.
+This is on \f[C]http://127.0.0.1:53682/\f[] and it may require you to
+unblock it temporarily if you are running a host firewall.
+.PP
+Once configured you can then use \f[C]rclone\f[] like this,
+.PP
+List directories in top level of your OneDrive
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+List all the files in your OneDrive
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:
+\f[]
+.fi
+.PP
+To copy a local directory to a OneDrive directory called backup
+.IP
+.nf
+\f[C]
+rclone\ copy\ /home/source\ remote:backup
+\f[]
+.fi
+.SS OneDrive for Business
+.PP
+There is additional support for OneDrive for Business.
+Select "b" when asked
+.IP
+.nf
+\f[C]
+Choose\ OneDrive\ account\ type?
+\ *\ Say\ b\ for\ a\ OneDrive\ business\ account
+\ *\ Say\ p\ for\ a\ personal\ OneDrive\ account
+b)\ Business
+p)\ Personal
+b/p>\ 
+\f[]
+.fi
+.PP
+After that rclone requires authentication of your account.
+The application will first authenticate your account, then query the
+OneDrive resource URL and do a second (silent) authentication for this
+resource URL.
+.SS Modified time and hashes
+.PP
+OneDrive allows modification times to be set on objects accurate to 1
+second.
+These will be used to detect whether objects need syncing or not.
+.PP
+OneDrive supports SHA1 type hashes, so you can use the
+\f[C]\-\-checksum\f[] flag.
+.SS Deleting files
+.PP
+Any files you delete with rclone will end up in the trash.
+Microsoft doesn\[aq]t provide an API to permanently delete files, nor to
+empty the trash, so you will have to do that with one of Microsoft\[aq]s
+apps or via the OneDrive website.
+.SS Specific options
+.PP
+Here are the command line options specific to this cloud storage system.
+.SS \-\-onedrive\-chunk\-size=SIZE
+.PP
+Above this size files will be chunked \- must be multiple of 320k.
+The default is 10MB.
+Note that the chunks will be buffered into memory.
+.SS \-\-onedrive\-upload\-cutoff=SIZE
+.PP
+Cutoff for switching to chunked upload \- must be <= 100MB.
+The default is 10MB.
+.SS Limitations
+.PP
+Note that OneDrive is case insensitive so you can\[aq]t have a file
+called "Hello.doc" and one called "hello.doc".
+.PP
+There are quite a few characters that can\[aq]t be in OneDrive file
+names.
+These can\[aq]t occur on Windows platforms, but on non\-Windows
+platforms they are common.
+Rclone will map these names to and from an identical looking unicode
+equivalent.
+For example if a file has a \f[C]?\f[] in it, it will be mapped to
+\f[C]？\f[] instead.
+.PP
+The largest allowed file size is 10GiB (10,737,418,240 bytes).
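+.PP
+As an illustration of the chunk options described above (the paths here
+are placeholders), a transfer of large files might raise the chunk size
+at the cost of more memory per transfer:
+.IP
+.nf
+\f[C]
+rclone\ copy\ \-\-onedrive\-upload\-cutoff\ 50M\ \-\-onedrive\-chunk\-size\ 20M\ /home/source\ remote:backup
+\f[]
+.fi
+.PP
+Remember that the chunk size must be a multiple of 320k and that each
+chunk is buffered in memory.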
+.SS QingStor +.PP +Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for +the \f[C]lsd\f[] command.) You may put subdirectories in too, eg +\f[C]remote:bucket/path/to/dir\f[]. +.PP +Here is an example of making an QingStor configuration. +First run +.IP +.nf +\f[C] +rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process. +.IP +.nf +\f[C] +No\ remotes\ found\ \-\ make\ a\ new\ one +n)\ New\ remote +r)\ Rename\ remote +c)\ Copy\ remote +s)\ Set\ configuration\ password +q)\ Quit\ config +n/r/c/s/q>\ n +name>\ remote +Type\ of\ storage\ to\ configure. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Amazon\ Drive +\ \ \ \\\ "amazon\ cloud\ drive" +\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) +\ \ \ \\\ "s3" +\ 3\ /\ Backblaze\ B2 +\ \ \ \\\ "b2" +\ 4\ /\ Dropbox +\ \ \ \\\ "dropbox" +\ 5\ /\ Encrypt/Decrypt\ a\ remote +\ \ \ \\\ "crypt" +\ 6\ /\ FTP\ Connection +\ \ \ \\\ "ftp" +\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) +\ \ \ \\\ "google\ cloud\ storage" +\ 8\ /\ Google\ Drive +\ \ \ \\\ "drive" +\ 9\ /\ Hubic +\ \ \ \\\ "hubic" +10\ /\ Local\ Disk +\ \ \ \\\ "local" +11\ /\ Microsoft\ OneDrive +\ \ \ \\\ "onedrive" +12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) +\ \ \ \\\ "swift" +13\ /\ QingStor\ Object\ Storage +\ \ \ \\\ "qingstor" +14\ /\ SSH/SFTP\ Connection +\ \ \ \\\ "sftp" +15\ /\ Yandex\ Disk +\ \ \ \\\ "yandex" +Storage>\ 13 +Get\ QingStor\ credentials\ from\ runtime.\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Enter\ QingStor\ credentials\ in\ the\ next\ step +\ \ \ \\\ "false" +\ 2\ /\ Get\ QingStor\ credentials\ from\ the\ environment\ (env\ vars\ or\ IAM) +\ \ \ \\\ "true" +env_auth>\ 1 +QingStor\ Access\ Key\ ID\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials. +access_key_id>\ access_key +QingStor\ Secret\ Access\ Key\ (password)\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials. +secret_access_key>\ secret_key +Enter\ a\ endpoint\ URL\ to\ connection\ QingStor\ API. +Leave\ blank\ will\ use\ the\ default\ value\ "https://qingstor.com:443" +endpoint> +Zone\ connect\ to.\ Default\ is\ "pek3a". +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ \ \ /\ The\ Beijing\ (China)\ Three\ Zone +\ 1\ |\ Needs\ location\ constraint\ pek3a. +\ \ \ \\\ "pek3a" +\ \ \ /\ The\ Shanghai\ (China)\ First\ Zone +\ 2\ |\ Needs\ location\ constraint\ sh1a. +\ \ \ \\\ "sh1a" +zone>\ 1 +Number\ of\ connnection\ retry. +Leave\ blank\ will\ use\ the\ default\ value\ "3". +connection_retries> +Remote\ config +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +env_auth\ =\ false +access_key_id\ =\ access_key +secret_access_key\ =\ secret_key +endpoint\ = +zone\ =\ pek3a +connection_retries\ = +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +This remote is called \f[C]remote\f[] and can now be used like this +.PP +See all buckets +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +Make a new bucket +.IP +.nf +\f[C] +rclone\ mkdir\ remote:bucket +\f[] +.fi +.PP +List the contents of a bucket +.IP +.nf +\f[C] +rclone\ ls\ remote:bucket +\f[] +.fi +.PP +Sync \f[C]/home/local/directory\f[] to the remote bucket, deleting any +excess files in the bucket. 
+.IP +.nf +\f[C] +rclone\ sync\ /home/local/directory\ remote:bucket +\f[] +.fi +.SS \-\-fast\-list +.PP +This remote supports \f[C]\-\-fast\-list\f[] which allows you to use +fewer transactions in exchange for more memory. +See the rclone docs (/docs/#fast-list) for more details. +.SS Multipart uploads +.PP +rclone supports multipart uploads with QingStor which means that it can +upload files bigger than 5GB. +Note that files uploaded with multipart upload don\[aq]t have an MD5SUM. +.SS Buckets and Zone +.PP +With QingStor you can list buckets (\f[C]rclone\ lsd\f[]) using any +zone, but you can only access the content of a bucket from the zone it +was created in. +If you attempt to access a bucket from the wrong zone, you will get an +error, +\f[C]incorrect\ zone,\ the\ bucket\ is\ not\ in\ \[aq]XXX\[aq]\ zone\f[]. +.SS Authentication +.PP +There are two ways to supply \f[C]rclone\f[] with a set of QingStor +credentials. +In order of precedence: +.IP \[bu] 2 +Directly in the rclone configuration file (as configured by +\f[C]rclone\ config\f[]) +.IP \[bu] 2 +set \f[C]access_key_id\f[] and \f[C]secret_access_key\f[] +.IP \[bu] 2 +Runtime configuration: +.IP \[bu] 2 +set \f[C]env_auth\f[] to \f[C]true\f[] in the config file +.IP \[bu] 2 +Exporting the following environment variables before running +\f[C]rclone\f[] +.RS 2 +.IP \[bu] 2 +Access Key ID: \f[C]QS_ACCESS_KEY_ID\f[] or \f[C]QS_ACCESS_KEY\f[] +.IP \[bu] 2 +Secret Access Key: \f[C]QS_SECRET_ACCESS_KEY\f[] or +\f[C]QS_SECRET_KEY\f[] +.RE +.SS Swift +.PP +Swift refers to Openstack Object +Storage (https://docs.openstack.org/swift/latest/). +Commercial implementations of that being: +.IP \[bu] 2 +Rackspace Cloud Files (https://www.rackspace.com/cloud/files/) +.IP \[bu] 2 +Memset Memstore (https://www.memset.com/cloud/storage/) +.IP \[bu] 2 +OVH Object +Storage (https://www.ovh.co.uk/public-cloud/storage/object-storage/) +.IP \[bu] 2 +Oracle Cloud Storage (https://cloud.oracle.com/storage-opc) +.PP +Paths are specified as \f[C]remote:container\f[] (or \f[C]remote:\f[] +for the \f[C]lsd\f[] command.) You may put subdirectories in too, eg +\f[C]remote:container/path/to/dir\f[]. +.PP +Here is an example of making a swift configuration. +First run +.IP +.nf +\f[C] +rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process. +.IP +.nf +\f[C] +No\ remotes\ found\ \-\ make\ a\ new\ one +n)\ New\ remote +s)\ Set\ configuration\ password +q)\ Quit\ config +n/s/q>\ n +name>\ remote +Type\ of\ storage\ to\ configure. 
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Amazon\ Drive +\ \ \ \\\ "amazon\ cloud\ drive" +\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) +\ \ \ \\\ "s3" +\ 3\ /\ Backblaze\ B2 +\ \ \ \\\ "b2" +\ 4\ /\ Box +\ \ \ \\\ "box" +\ 5\ /\ Dropbox +\ \ \ \\\ "dropbox" +\ 6\ /\ Encrypt/Decrypt\ a\ remote +\ \ \ \\\ "crypt" +\ 7\ /\ FTP\ Connection +\ \ \ \\\ "ftp" +\ 8\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) +\ \ \ \\\ "google\ cloud\ storage" +\ 9\ /\ Google\ Drive +\ \ \ \\\ "drive" +10\ /\ Hubic +\ \ \ \\\ "hubic" +11\ /\ Local\ Disk +\ \ \ \\\ "local" +12\ /\ Microsoft\ Azure\ Blob\ Storage +\ \ \ \\\ "azureblob" +13\ /\ Microsoft\ OneDrive +\ \ \ \\\ "onedrive" +14\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) +\ \ \ \\\ "swift" +15\ /\ QingClound\ Object\ Storage +\ \ \ \\\ "qingstor" +16\ /\ SSH/SFTP\ Connection +\ \ \ \\\ "sftp" +17\ /\ Yandex\ Disk +\ \ \ \\\ "yandex" +18\ /\ http\ Connection +\ \ \ \\\ "http" +Storage>\ swift +Get\ swift\ credentials\ from\ environment\ variables\ in\ standard\ OpenStack\ form. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Enter\ swift\ credentials\ in\ the\ next\ step +\ \ \ \\\ "false" +\ 2\ /\ Get\ swift\ credentials\ from\ environment\ vars.\ Leave\ other\ fields\ blank\ if\ using\ this. +\ \ \ \\\ "true" +env_auth>\ 1 +User\ name\ to\ log\ in. +user>\ user_name +API\ key\ or\ password. +key>\ password_or_api_key +Authentication\ URL\ for\ server. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Rackspace\ US +\ \ \ \\\ "https://auth.api.rackspacecloud.com/v1.0" +\ 2\ /\ Rackspace\ UK +\ \ \ \\\ "https://lon.auth.api.rackspacecloud.com/v1.0" +\ 3\ /\ Rackspace\ v2 +\ \ \ \\\ "https://identity.api.rackspacecloud.com/v2.0" +\ 4\ /\ Memset\ Memstore\ UK +\ \ \ \\\ "https://auth.storage.memset.com/v1.0" +\ 5\ /\ Memset\ Memstore\ UK\ v2 +\ \ \ \\\ "https://auth.storage.memset.com/v2.0" +\ 6\ /\ OVH +\ \ \ \\\ "https://auth.cloud.ovh.net/v2.0" +auth>\ 1 +User\ domain\ \-\ optional\ (v3\ auth) +domain>\ Default +Tenant\ name\ \-\ optional\ for\ v1\ auth,\ required\ otherwise +tenant>\ tenant_name +Tenant\ domain\ \-\ optional\ (v3\ auth) +tenant_domain> +Region\ name\ \-\ optional +region> +Storage\ URL\ \-\ optional +storage_url> +AuthVersion\ \-\ optional\ \-\ set\ to\ (1,2,3)\ if\ your\ auth\ URL\ has\ no\ version +auth_version> +Endpoint\ type\ to\ choose\ from\ the\ service\ catalogue +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Public\ (default,\ choose\ this\ if\ not\ sure) +\ \ \ \\\ "public" +\ 2\ /\ Internal\ (use\ internal\ service\ net) +\ \ \ \\\ "internal" +\ 3\ /\ Admin +\ \ \ \\\ "admin" +endpoint_type> +Remote\ config +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +env_auth\ =\ false +user\ =\ user_name +key\ =\ password_or_api_key +auth\ =\ https://auth.api.rackspacecloud.com/v1.0 +domain\ =\ Default +tenant\ = +tenant_domain\ = +region\ = +storage_url\ = +auth_version\ = +endpoint_type\ = +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +This remote is called \f[C]remote\f[] and can now be used like this +.PP +See all containers +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +Make a new container +.IP +.nf +\f[C] +rclone\ mkdir\ remote:container +\f[] +.fi +.PP +List the contents of a container +.IP +.nf +\f[C] +rclone\ ls\ remote:container +\f[] +.fi +.PP +Sync 
\f[C]/home/local/directory\f[] to the remote container, deleting
+any excess files in the container.
+.IP
+.nf
+\f[C]
+rclone\ sync\ /home/local/directory\ remote:container
+\f[]
+.fi
+.SS Configuration from an OpenStack credentials file
+.PP
+An OpenStack credentials file typically looks something like this
+(without the comments)
+.IP
+.nf
+\f[C]
+export\ OS_AUTH_URL=https://a.provider.net/v2.0
+export\ OS_TENANT_ID=ffffffffffffffffffffffffffffffff
+export\ OS_TENANT_NAME="1234567890123456"
+export\ OS_USERNAME="123abc567xy"
+echo\ "Please\ enter\ your\ OpenStack\ Password:\ "
+read\ \-sr\ OS_PASSWORD_INPUT
+export\ OS_PASSWORD=$OS_PASSWORD_INPUT
+export\ OS_REGION_NAME="SBG1"
+if\ [\ \-z\ "$OS_REGION_NAME"\ ];\ then\ unset\ OS_REGION_NAME;\ fi
+\f[]
+.fi
+.PP
+The config file needs to look something like this where
+\f[C]$OS_USERNAME\f[] represents the value of the \f[C]OS_USERNAME\f[]
+variable \- \f[C]123abc567xy\f[] in the example above.
+.IP
+.nf
+\f[C]
+[remote]
+type\ =\ swift
+user\ =\ $OS_USERNAME
+key\ =\ $OS_PASSWORD
+auth\ =\ $OS_AUTH_URL
+tenant\ =\ $OS_TENANT_NAME
+\f[]
+.fi
+.PP
+Note that you may (or may not) need to set \f[C]region\f[] too \- try
+without first.
+.SS Configuration from the environment
+.PP
+If you prefer you can configure rclone to use swift using a standard set
+of OpenStack environment variables.
+.PP
+When you run through the config, make sure you choose \f[C]true\f[] for
+\f[C]env_auth\f[] and leave everything else blank.
+.PP
+rclone will then set any empty config parameters from the environment
+using standard OpenStack environment variables.
+There is a list of the
+variables (https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment)
+in the docs for the swift library.
+.SS Using rclone without a config file
+.PP
+You can use rclone with swift without a config file, if desired, like
+this:
+.IP
+.nf
+\f[C]
+source\ openstack\-credentials\-file
+export\ RCLONE_CONFIG_MYREMOTE_TYPE=swift
+export\ RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
+rclone\ lsd\ myremote:
+\f[]
+.fi
+.SS \-\-fast\-list
+.PP
+This remote supports \f[C]\-\-fast\-list\f[] which allows you to use
+fewer transactions in exchange for more memory.
+See the rclone docs (/docs/#fast-list) for more details.
+.SS Specific options
+.PP
+Here are the command line options specific to this cloud storage system.
+.SS \-\-swift\-chunk\-size=SIZE
+.PP
+Above this size files will be chunked into a _segments container.
+The default for this is 5GB which is its maximum value.
+.SS Modified time
+.PP
+The modified time is stored as metadata on the object as
+\f[C]X\-Object\-Meta\-Mtime\f[] as floating point since the epoch
+accurate to 1 ns.
+.PP
+This is a de facto standard (used in the official python\-swiftclient
+amongst others) for storing the modification time for an object.
+.SS Limitations
+.PP
+The Swift API doesn\[aq]t return a correct MD5SUM for segmented files
+(Dynamic or Static Large Objects) so rclone won\[aq]t check or use the
+MD5SUM for these.
+.SS Troubleshooting
+.SS Rclone gives Failed to create file system for "remote:": Bad Request
+.PP
+Due to an oddity of the underlying swift library, it gives a "Bad
+Request" error rather than a more sensible error when the authentication
+fails for Swift.
+.PP
+So this most likely means your username / password is wrong.
+You can investigate further with the \f[C]\-\-dump\-bodies\f[] flag.
+.PP
+This may also be caused by specifying the region when you shouldn\[aq]t
+have (eg OVH).
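+.PP
+One way to investigate (the remote name is illustrative) is to list your
+containers with full protocol debugging and inspect the authentication
+exchange in the output:
+.IP
+.nf
+\f[C]
+rclone\ \-\-dump\-bodies\ lsd\ remote:
+\f[]
+.fi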
+.SS Rclone gives Failed to create file system: Response didn\[aq]t have +storage storage url and auth token +.PP +This is most likely caused by forgetting to specify your tenant when +setting up a swift remote. +.SS SFTP +.PP +SFTP is the Secure (or SSH) File Transfer +Protocol (https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol). +.PP +It runs over SSH v2 and is standard with most modern SSH installations. +.PP +Paths are specified as \f[C]remote:path\f[]. +If the path does not begin with a \f[C]/\f[] it is relative to the home +directory of the user. +An empty path \f[C]remote:\f[] refers to the users home directory. +.PP +Here is an example of making a SFTP configuration. +First run +.IP +.nf +\f[C] +rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process. +.IP +.nf +\f[C] +No\ remotes\ found\ \-\ make\ a\ new\ one +n)\ New\ remote +s)\ Set\ configuration\ password +q)\ Quit\ config +n/s/q>\ n +name>\ remote +Type\ of\ storage\ to\ configure. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Amazon\ Drive +\ \ \ \\\ "amazon\ cloud\ drive" +\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) +\ \ \ \\\ "s3" +\ 3\ /\ Backblaze\ B2 +\ \ \ \\\ "b2" +\ 4\ /\ Dropbox +\ \ \ \\\ "dropbox" +\ 5\ /\ Encrypt/Decrypt\ a\ remote +\ \ \ \\\ "crypt" +\ 6\ /\ FTP\ Connection +\ \ \ \\\ "ftp" +\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) +\ \ \ \\\ "google\ cloud\ storage" +\ 8\ /\ Google\ Drive +\ \ \ \\\ "drive" +\ 9\ /\ Hubic +\ \ \ \\\ "hubic" +10\ /\ Local\ Disk +\ \ \ \\\ "local" +11\ /\ Microsoft\ OneDrive +\ \ \ \\\ "onedrive" +12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) +\ \ \ \\\ "swift" +13\ /\ SSH/SFTP\ Connection +\ \ \ \\\ "sftp" +14\ /\ Yandex\ Disk +\ \ \ \\\ "yandex" +15\ /\ http\ Connection +\ \ \ \\\ "http" +Storage>\ sftp +SSH\ host\ to\ connect\ to +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Connect\ to\ example.com +\ \ \ \\\ "example.com" +host>\ example.com +SSH\ username,\ leave\ blank\ for\ current\ username,\ ncw +user>\ sftpuser +SSH\ port,\ leave\ blank\ to\ use\ default\ (22) +port>\ +SSH\ password,\ leave\ blank\ to\ use\ ssh\-agent. +y)\ Yes\ type\ in\ my\ own\ password +g)\ Generate\ random\ password +n)\ No\ leave\ this\ optional\ password\ blank +y/g/n>\ n +Path\ to\ unencrypted\ PEM\-encoded\ private\ key\ file,\ leave\ blank\ to\ use\ ssh\-agent. +key_file>\ +Remote\ config +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +host\ =\ example.com +user\ =\ sftpuser +port\ =\ +pass\ =\ +key_file\ =\ +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +This remote is called \f[C]remote\f[] and can now be used like this +.PP +See all directories in the home directory +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +Make a new directory +.IP +.nf +\f[C] +rclone\ mkdir\ remote:path/to/directory +\f[] +.fi +.PP +List the contents of a directory +.IP +.nf +\f[C] +rclone\ ls\ remote:path/to/directory +\f[] +.fi +.PP +Sync \f[C]/home/local/directory\f[] to the remote directory, deleting +any excess files in the directory. +.IP +.nf +\f[C] +rclone\ sync\ /home/local/directory\ remote:directory +\f[] +.fi +.SS SSH Authentication +.PP +The SFTP remote supports 3 authentication methods +.IP \[bu] 2 +Password +.IP \[bu] 2 +Key file +.IP \[bu] 2 +ssh\-agent +.PP +Key files should be unencrypted PEM\-encoded private key files. 
+For instance \f[C]/home/$USER/.ssh/id_rsa\f[]. +.PP +If you don\[aq]t specify \f[C]pass\f[] or \f[C]key_file\f[] then it will +attempt to contact an ssh\-agent. +.SS ssh\-agent on macOS +.PP +Note that there seem to be various problems with using an ssh\-agent on +macOS due to recent changes in the OS. +The most effective work\-around seems to be to start an ssh\-agent in +each session, eg +.IP +.nf +\f[C] +eval\ `ssh\-agent\ \-s`\ &&\ ssh\-add\ \-A +\f[] +.fi +.PP +And then at the end of the session +.IP +.nf +\f[C] +eval\ `ssh\-agent\ \-k` +\f[] +.fi +.PP +These commands can be used in scripts of course. +.SS Modified time +.PP +Modified times are stored on the server to 1 second precision. +.PP +Modified times are used in syncing and are fully supported. +.SS Limitations +.PP +SFTP supports checksums if the same login has shell access and +\f[C]md5sum\f[] or \f[C]sha1sum\f[] as well as \f[C]echo\f[] are in the +remote\[aq]s PATH. +.PP +The only ssh agent supported under Windows is Putty\[aq]s pagent. +.PP +SFTP isn\[aq]t supported under plan9 until this +issue (https://github.com/pkg/sftp/issues/156) is fixed. +.PP +Note that since SFTP isn\[aq]t HTTP based the following flags don\[aq]t +work with it: \f[C]\-\-dump\-headers\f[], \f[C]\-\-dump\-bodies\f[], +\f[C]\-\-dump\-auth\f[] +.PP +Note that \f[C]\-\-timeout\f[] isn\[aq]t supported (but +\f[C]\-\-contimeout\f[] is). +.SS Yandex Disk +.PP +Yandex Disk (https://disk.yandex.com) is a cloud storage solution +created by Yandex (https://yandex.com). +.PP +Yandex paths may be as deep as required, eg +\f[C]remote:directory/subdirectory\f[]. +.PP +Here is an example of making a yandex configuration. +First run +.IP +.nf +\f[C] +rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +No\ remotes\ found\ \-\ make\ a\ new\ one +n)\ New\ remote +s)\ Set\ configuration\ password +n/s>\ n +name>\ remote +Type\ of\ storage\ to\ configure. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ 1\ /\ Amazon\ Drive +\ \ \ \\\ "amazon\ cloud\ drive" +\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) +\ \ \ \\\ "s3" +\ 3\ /\ Backblaze\ B2 +\ \ \ \\\ "b2" +\ 4\ /\ Dropbox +\ \ \ \\\ "dropbox" +\ 5\ /\ Encrypt/Decrypt\ a\ remote +\ \ \ \\\ "crypt" +\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) +\ \ \ \\\ "google\ cloud\ storage" +\ 7\ /\ Google\ Drive +\ \ \ \\\ "drive" +\ 8\ /\ Hubic +\ \ \ \\\ "hubic" +\ 9\ /\ Local\ Disk +\ \ \ \\\ "local" +10\ /\ Microsoft\ OneDrive +\ \ \ \\\ "onedrive" +11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) +\ \ \ \\\ "swift" +12\ /\ SSH/SFTP\ Connection +\ \ \ \\\ "sftp" +13\ /\ Yandex\ Disk +\ \ \ \\\ "yandex" +Storage>\ 13 +Yandex\ Client\ Id\ \-\ leave\ blank\ normally. +client_id> +Yandex\ Client\ Secret\ \-\ leave\ blank\ normally. +client_secret> +Remote\ config +Use\ auto\ config? +\ *\ Say\ Y\ if\ not\ sure +\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine +y)\ Yes +n)\ No +y/n>\ y +If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth +Log\ in\ and\ authorize\ rclone\ for\ access +Waiting\ for\ code... 
+Got\ code +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +client_id\ = +client_secret\ = +token\ =\ {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016\-12\-29T12:27:11.362788025Z"} +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +See the remote setup docs (https://rclone.org/remote_setup/) for how to +set it up on a machine with no Internet browser available. +.PP +Note that rclone runs a webserver on your local machine to collect the +token as returned from Yandex Disk. +This only runs from the moment it opens your browser to the moment you +get back the verification code. +This is on \f[C]http://127.0.0.1:53682/\f[] and this it may require you +to unblock it temporarily if you are running a host firewall. +.PP +Once configured you can then use \f[C]rclone\f[] like this, +.PP +See top level directories +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +Make a new directory +.IP +.nf +\f[C] +rclone\ mkdir\ remote:directory +\f[] +.fi +.PP +List the contents of a directory +.IP +.nf +\f[C] +rclone\ ls\ remote:directory +\f[] +.fi +.PP +Sync \f[C]/home/local/directory\f[] to the remote path, deleting any +excess files in the path. +.IP +.nf +\f[C] +rclone\ sync\ /home/local/directory\ remote:directory +\f[] +.fi +.SS \-\-fast\-list +.PP +This remote supports \f[C]\-\-fast\-list\f[] which allows you to use +fewer transactions in exchange for more memory. +See the rclone docs (/docs/#fast-list) for more details. +.SS Modified time +.PP +Modified times are supported and are stored accurate to 1 ns in custom +metadata called \f[C]rclone_modified\f[] in RFC3339 with nanoseconds +format. +.SS MD5 checksums +.PP +MD5 checksums are natively supported by Yandex Disk. +.SS Emptying Trash +.PP +If you wish to empty your trash you can use the +\f[C]rclone\ cleanup\ remote:\f[] command which will permanently delete +all your trashed files. +This command does not take any path arguments. .SS Local Filesystem .PP Local paths are specified as normal filesystem paths, eg @@ -7330,20 +8733,11 @@ $\ rclone\ \-L\ ls\ /tmp/a \ \ \ \ \ \ \ \ 6\ b/one \f[] .fi -.SS \-\-no\-local\-unicode\-normalization +.SS \-\-local\-no\-unicode\-normalization .PP -By default rclone normalizes (NFC) the unicode representation of -filenames and directories. -This flag disables that normalization and uses the same representation -as the local filesystem. -.PP -This can be useful if you need to retain the local unicode -representation and you are using a cloud provider which supports -unnormalized names (e.g. -S3 or ACD). -.PP -This should also work with any provider if you are using crypt and have -file name encryption (the default) or obfuscation turned on. +This flag is deprecated now. +Rclone no longer normalizes unicode file names, but it compares them +with unicode normalization in the sync routine instead. .SS \-\-one\-file\-system, \-x .PP This tells rclone to stay in the filesystem specified by the root and @@ -7392,8 +8786,153 @@ as being on the same filesystem. \f[B]NB\f[] This flag is only available on Unix based systems. On systems where it isn\[aq]t supported (eg Windows) it will not appear as an valid flag. +.SS \-\-skip\-links +.PP +This flag disables warning messages on skipped symlinks or junction +points, as you explicitly acknowledge that they should be skipped. 
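+.PP
+For example (paths are illustrative), to sync a tree that contains
+symlinks without the log filling up with warnings about them:
+.IP
+.nf
+\f[C]
+rclone\ sync\ \-\-skip\-links\ /home/source\ remote:backup
+\f[]
+.fi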
.SS Changelog .IP \[bu] 2 +v1.38 \- 2017\-09\-30 +.RS 2 +.IP \[bu] 2 +New backends +.IP \[bu] 2 +Azure Blob Storage (thanks Andrei Dragomir) +.IP \[bu] 2 +Box +.IP \[bu] 2 +Onedrive for Business (thanks Oliver Heyme) +.IP \[bu] 2 +QingStor from QingCloud (thanks wuyu) +.IP \[bu] 2 +New commands +.IP \[bu] 2 +\f[C]rcat\f[] \- read from standard input and stream upload +.IP \[bu] 2 +\f[C]tree\f[] \- shows a nicely formatted recursive listing +.IP \[bu] 2 +\f[C]cryptdecode\f[] \- decode crypted file names (thanks ishuah) +.IP \[bu] 2 +\f[C]config\ show\f[] \- print the config file +.IP \[bu] 2 +\f[C]config\ file\f[] \- print the config file location +.IP \[bu] 2 +New Features +.IP \[bu] 2 +Empty directories are deleted on \f[C]sync\f[] +.IP \[bu] 2 +\f[C]dedupe\f[] \- implement merging of duplicate directories +.IP \[bu] 2 +\f[C]check\f[] and \f[C]cryptcheck\f[] made more consistent and use less +memory +.IP \[bu] 2 +\f[C]cleanup\f[] for remaining remotes (thanks ishuah) +.IP \[bu] 2 +\f[C]\-\-immutable\f[] for ensuring that files don\[aq]t change (thanks +Jacob McNamee) +.IP \[bu] 2 +\f[C]\-\-user\-agent\f[] option (thanks Alex McGrath Kraak) +.IP \[bu] 2 +\f[C]\-\-disable\f[] flag to disable optional features +.IP \[bu] 2 +\f[C]\-\-bind\f[] flag for choosing the local addr on outgoing +connections +.IP \[bu] 2 +Support for zsh auto\-completion (thanks bpicode) +.IP \[bu] 2 +Stop normalizing file names but do a normalized compare in \f[C]sync\f[] +.IP \[bu] 2 +Compile +.IP \[bu] 2 +Update to using go1.9 as the default go version +.IP \[bu] 2 +Remove snapd build due to maintenance problems +.IP \[bu] 2 +Bug Fixes +.IP \[bu] 2 +Improve retriable error detection which makes multipart uploads better +.IP \[bu] 2 +Make \f[C]check\f[] obey \f[C]\-\-ignore\-size\f[] +.IP \[bu] 2 +Fix bwlimit toggle in conjunction with schedules (thanks cbruegg) +.IP \[bu] 2 +\f[C]config\f[] ensures newly written config is on the same mount +.IP \[bu] 2 +Local +.IP \[bu] 2 +Revert to copy when moving file across file system boundaries +.IP \[bu] 2 +\f[C]\-\-skip\-links\f[] to suppress symlink warnings (thanks Zhiming +Wang) +.IP \[bu] 2 +Mount +.IP \[bu] 2 +Re\-use \f[C]rcat\f[] internals to support uploads from all remotes +.IP \[bu] 2 +Dropbox +.IP \[bu] 2 +Fix "entry doesn\[aq]t belong in directory" error +.IP \[bu] 2 +Stop using deprecated API methods +.IP \[bu] 2 +Swift +.IP \[bu] 2 +Fix server side copy to empty container with \f[C]\-\-fast\-list\f[] +.IP \[bu] 2 +Google Drive +.IP \[bu] 2 +Change the default for \f[C]\-\-drive\-use\-trash\f[] to \f[C]true\f[] +.IP \[bu] 2 +S3 +.IP \[bu] 2 +Set session token when using STS (thanks Girish Ramakrishnan) +.IP \[bu] 2 +Glacier docs and error messages (thanks Jan Varho) +.IP \[bu] 2 +Read 1000 (not 1024) items in dir listings to fix Wasabi +.IP \[bu] 2 +Backblaze B2 +.IP \[bu] 2 +Fix SHA1 mismatch when downloading files with no SHA1 +.IP \[bu] 2 +Calculate missing hashes on the fly instead of spooling +.IP \[bu] 2 +\f[C]\-\-b2\-hard\-delete\f[] to permanently delete (not hide) files +(thanks John Papandriopoulos) +.IP \[bu] 2 +Hubic +.IP \[bu] 2 +Fix creating containers \- no longer have to use the \f[C]default\f[] +container +.IP \[bu] 2 +Swift +.IP \[bu] 2 +Optionally configure from a standard set of OpenStack environment vars +.IP \[bu] 2 +Add \f[C]endpoint_type\f[] config +.IP \[bu] 2 +Google Cloud Storage +.IP \[bu] 2 +Fix bucket creation to work with limited permission users +.IP \[bu] 2 +SFTP +.IP \[bu] 2 +Implement connection pooling for multiple ssh 
connections
+.IP \[bu] 2
+Limit new connections per second
+.IP \[bu] 2
+Add support for MD5 and SHA1 hashes where available (thanks Christian
+Brüggemann)
+.IP \[bu] 2
+HTTP
+.IP \[bu] 2
+Fix URL encoding issues
+.IP \[bu] 2
+Fix directories with \f[C]:\f[] in
+.IP \[bu] 2
+Fix panic with URL encoded content
+.RE
+.IP \[bu] 2
v1.37 \- 2017\-07\-22
.RS 2
.IP \[bu] 2
@@ -7423,7 +8962,7 @@ This uses less transactions (important if you pay for them)
.IP \[bu] 2
This may or may not be quicker
.IP \[bu] 2
-This will user more memory as it has to hold the listing in memory
+This will use more memory as it has to hold the listing in memory
.IP \[bu] 2
\-\-old\-sync\-method deprecated \- the remaining uses are covered by
\-\-fast\-list
@@ -9235,6 +10774,23 @@ hasn\[aq]t got the Microsoft Office suite installed.
The easiest way to fix is to install the Word viewer and the Microsoft
Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and
later versions\[aq] file formats
+.SS tcp lookup some.domain.com no such host
+.PP
+This happens when rclone cannot resolve a domain.
+Please check that your DNS setup is generally working, e.g.
+.IP
+.nf
+\f[C]
+#\ both\ should\ print\ a\ long\ list\ of\ possible\ IP\ addresses
+dig\ www.googleapis.com\ \ \ \ \ \ \ \ \ \ #\ resolve\ using\ your\ default\ DNS
+dig\ www.googleapis.com\ \@8.8.8.8\ #\ resolve\ with\ Google\[aq]s\ DNS\ server
+\f[]
+.fi
+.PP
+If you are using \f[C]systemd\-resolved\f[] (the default on Arch Linux),
+ensure it is at version 233 or higher.
+Earlier versions contain a bug which prevents some domains from being
+resolved properly.
.SS License
.PP
This is free software under the terms of the MIT license (check the
@@ -9424,6 +10980,38 @@ sainaen
gdm85
.IP \[bu] 2
Yaroslav Halchenko
+.IP \[bu] 2
+John Papandriopoulos
+.IP \[bu] 2
+Zhiming Wang
+.IP \[bu] 2
+Andy Pilate
+.IP \[bu] 2
+Oliver Heyme
+.IP \[bu] 2
+wuyu
+.IP \[bu] 2
+Andrei Dragomir
+.IP \[bu] 2
+Christian Brüggemann
+.IP \[bu] 2
+Alex McGrath Kraak
+.IP \[bu] 2
+bpicode
+.IP \[bu] 2
+Daniel Jagszent
+.IP \[bu] 2
+Josiah White
+.IP \[bu] 2
+Ishuah Kariuki
+.IP \[bu] 2
+Jan Varho
+.IP \[bu] 2
+Girish Ramakrishnan
+.IP \[bu] 2
+LingMan
+.IP \[bu] 2
+Jacob McNamee
.SH Contact the rclone project
.SS Forum
.PP