From bd0227450e47f55558344b0a152d7b301792cd47 Mon Sep 17 00:00:00 2001
From: Nick Craig-Wood
Date: Sat, 18 Jun 2016 16:29:53 +0100
Subject: [PATCH] Version v1.30
---
 MANUAL.html               | 192 +++++++++++++++--
 MANUAL.md                 | 284 +++++++++++++++++++++++--
 MANUAL.txt                | 421 ++++++++++++++++++++++++++++++------
 docs/content/changelog.md |  38 +++-
 docs/content/downloads.md |  42 ++--
 fs/version.go             |   2 +-
 rclone.1                  | 366 +++++++++++++++++++++++++++++++--
 7 files changed, 1194 insertions(+), 151 deletions(-)

diff --git a/MANUAL.html b/MANUAL.html
index e463e9875..9058782f4 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -12,7 +12,7 @@

Rclone

Logo

@@ -116,11 +116,11 @@ destpath/sourcepath/two.txt

If dest:path doesn't exist, it is created and the source:path contents go there.

move source:path dest:path

Moves the source to the destination.

-

If there are no filters in use this is equivalent to a copy followed by a purge, but may using server side operations to speed it up if possible.

-

If filters are in use then it is equivalent to a copy followed by delete, followed by an rmdir (which only removes the directory if empty). The individual file moves will be moved with srver side operations if possible.

+

If there are no filters in use this is equivalent to a copy followed by a purge, but may use server side operations to speed it up if possible.

+

If filters are in use then it is equivalent to a copy followed by delete, followed by an rmdir (which only removes the directory if empty). The individual file moves will be moved with server side operations if possible.

Important: Since this can cause data loss, test first with the --dry-run flag.
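The unfiltered copy-then-purge sequence can be sketched locally like this (a simplified illustration, not rclone's implementation; `move_tree` is a hypothetical helper and no server side optimisation is shown):

```python
import os
import shutil
import tempfile

def move_tree(src, dst):
    """Move by copy-then-purge: copy everything into dst, then delete src."""
    shutil.copytree(src, dst, dirs_exist_ok=True)  # the "copy" step
    shutil.rmtree(src)                             # the "purge" step
```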

rclone ls remote:path

-

List all the objects in the the path with size and path.

+

List all the objects in the path with size and path.

rclone lsd remote:path

List all directories/containers/buckets in the path.

rclone lsl remote:path

@@ -209,6 +209,20 @@ two-3.txt: renamed from: two.txt

Enter an interactive configuration session.

rclone help

Prints help on rclone commands and options.

+

Quoting and the shell

+

When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.

+

Here are some gotchas which may help users unfamiliar with the shell rules:

+

Linux / OSX

+

If your names have spaces or shell metacharacters (eg *, ?, $, ', " etc) then you must quote them. Use single quotes ' by default.

+
rclone copy 'Important files?' remote:backup
+

If you want to send a ' you will need to use ", eg

+
rclone copy "O'Reilly Reviews" remote:backup
+

The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.

+

Windows

+

If your names have spaces in them you need to put them in ", eg

+
rclone copy "E:\folder name\folder name\folder name" remote:backup
+

If you are using the root directory on its own then don't quote it (see #464 for why), eg

+
rclone copy E:\ remote:backup

Server Side Copy

Drive, S3, Dropbox, Swift and Google Cloud Storage support server side copy.

This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.

@@ -224,9 +238,9 @@ rclone sync /path/to/files remote:current-backup

Options

Rclone has a number of options to control its behaviour.

Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
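A rough re-implementation of that duration grammar, to show how the unit suffixes combine (illustrative only; Go's time.ParseDuration is the authoritative parser):

```python
import re

# seconds per unit, matching the valid units listed above
UNITS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3,
         "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_duration(s):
    """Return the duration in seconds, e.g. '2h45m' -> 9900.0."""
    sign = -1.0 if s.startswith("-") else 1.0
    s = s.lstrip("+-")
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ns|us|µs|ms|s|m|h)", s):
        total += float(value) * UNITS[unit]
    return sign * total
```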

-

Options which use SIZE use kByte by default. However a suffix of k for kBytes, M for MBytes and G for GBytes may be used. These are the binary units, eg 2**10, 2**20, 2**30 respectively.

+

Options which use SIZE use kByte by default. However a suffix of b for bytes, k for kBytes, M for MBytes and G for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
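The SIZE suffix rules work out as follows (a sketch of the arithmetic above, not rclone's actual parser):

```python
# binary multipliers for the b|k|M|G suffixes
SUFFIXES = {"b": 1, "k": 2**10, "M": 2**20, "G": 2**30}

def parse_size(s):
    """'10M' -> 10485760 bytes; a bare '10' means 10 kBytes -> 10240."""
    if s and s[-1] in SUFFIXES:
        return int(float(s[:-1]) * SUFFIXES[s[-1]])
    return int(float(s) * 2**10)  # default unit is kByte
```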

--bwlimit=SIZE

-

Bandwidth limit in kBytes/s, or use suffix k|M|G. The default is 0 which means to not limit bandwidth.

+

Bandwidth limit in kBytes/s, or use suffix b|k|M|G. The default is 0 which means to not limit bandwidth.

For example to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M

This only limits the bandwidth of the data transfer, it doesn't limit the bandwidth of the directory listings etc.

--checkers=N

@@ -250,16 +264,26 @@ rclone sync /path/to/files remote:current-backup

--ignore-existing

Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.

While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.

+

--ignore-size

+

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If --checksum is set then it only checks the checksum.

+

It will also cause rclone to skip verifying the sizes are the same after transfer.

+

This can be useful for transferring files to and from onedrive which occasionally misreports the size of image files (see #399 for more info).

-I, --ignore-times

Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.

Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum).

--log-file=FILE

-

Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag.

+

Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.

--low-level-retries NUMBER

This controls the number of low level retries rclone does.

A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v flag.

This shouldn't need to be changed from the default in normal operations, however if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries flag) quicker.

Disable low level retries with --low-level-retries 1.
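The idea of retrying an individual operation several times before giving up can be sketched like this (illustrative only, not rclone's code; `with_retries` is a hypothetical helper):

```python
def with_retries(op, low_level_retries=10):
    """Run op(), retrying transient IOErrors up to low_level_retries times.
    Setting low_level_retries=1 disables retries, as described above."""
    for attempt in range(low_level_retries):
        try:
            return op()
        except IOError:
            if attempt == low_level_retries - 1:
                raise  # out of low level retries; escalate
```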

+

--max-depth=N

+

This modifies the recursion depth for all the commands except purge.

+

So if you do rclone --max-depth 1 ls remote:path you will see only the files in the top level directory. Using --max-depth 2 means you will see all the files in the first two directory levels and so on.

+

For historical reasons the lsd command defaults to using a --max-depth of 1 - you can override this with the command line flag.

+

You can use this command to disable recursion (with --max-depth 1).

+

Note that if you use this with sync and --delete-excluded the files not recursed through are considered excluded and will be deleted on the destination. Test first with --dry-run if you are not sure what will happen.
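A depth-limited listing in the spirit of --max-depth can be sketched with a pruned directory walk (illustrative; this is not how rclone traverses remotes):

```python
import os

def list_files(root, max_depth):
    """Yield files at most max_depth levels below root;
    max_depth=1 gives only the top-level files."""
    for dirpath, dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        level = 0 if rel == "." else rel.count(os.sep) + 1
        if level + 1 >= max_depth:
            dirnames[:] = []  # files any deeper would exceed max_depth
        for name in filenames:
            yield os.path.join(dirpath, name)
```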

--modify-window=TIME

When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.

The default is 1ns unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s by default.
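The comparison amounts to checking that two timestamps differ by no more than the window (a sketch of the idea only):

```python
def times_equal(t1, t2, modify_window=1e-9):
    """True if two mtimes (seconds since the epoch) are within the
    window; 1e-9 mirrors the 1ns default, 1.0 the OS X case above."""
    return abs(t1 - t2) <= modify_window
```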

@@ -276,7 +300,6 @@ rclone sync /path/to/files remote:current-backup

--size-only

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.

This can be useful when transferring files from dropbox which have been modified by the desktop sync client, which doesn't set checksums or modification times in the same way as rclone.

-

When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.

--stats=TIME

Rclone will print stats at regular intervals to show its progress.

This sets the interval.

@@ -323,9 +346,9 @@ a) Add Password q) Quit to main menu a/q> a Enter NEW configuration password: -password> +password: Confirm NEW password: -password> +password: Password set Your configuration is encrypted. c) Change Password @@ -334,10 +357,10 @@ q) Quit to main menu c/u/q>

Your configuration is now encrypted, and every time you start rclone you will now be asked for the password. In the same menu you can change the password or completely remove encryption from your configuration.

There is no way to recover the configuration if you lose your password.

-

rclone uses nacl secretbox which in term uses XSalsa20 and Poly1305 to encrypt and authenticate your configuration with secret-key cryptography. The password is SHA-256 hashed, which produces the key for secretbox. The hashed password is not stored.

-

While this provides very good security, we do not recommend storing your encrypted rclone configuration in public, if it contains sensitive information, maybe except if you use a very strong password.

+

rclone uses nacl secretbox which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your configuration with secret-key cryptography. The password is SHA-256 hashed, which produces the key for secretbox. The hashed password is not stored.
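The key derivation step described above can be sketched with hashlib (this shows only the SHA-256 hashing of the password into a 32-byte secretbox key; the rest of rclone's configuration encryption format is not shown):

```python
import hashlib

def config_key(password):
    """SHA-256 the password to get the 32-byte secret-key for secretbox."""
    return hashlib.sha256(password.encode("utf-8")).digest()
```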

+

While this provides very good security, we do not recommend storing your encrypted rclone configuration in public if it contains sensitive information, unless perhaps you use a very strong password.

If it is safe in your environment, you can set the RCLONE_CONFIG_PASS environment variable to contain your password, in which case it will be used for decrypting the configuration.

-

If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password, if if RCLONE_CONFIG_PASS doesn't contain a valid password.

+

If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS doesn't contain a valid password.

Developer options

These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with remote name eg --drive-test-option - see the docs for the remote in question.

--cpuprofile=FILE

@@ -372,6 +395,13 @@ c/u/q>
  • --dump-filters
  • See the filtering section.

    +

    Logging

    +

    rclone has 3 levels of logging, Error, Info and Debug.

    +

    By default rclone logs Error and Info to standard error and Debug to standard output. This means you can redirect standard output and standard error to different places.

    +

    By default rclone will produce Error and Info level messages.

    +

    If you use the -q flag, rclone will only produce Error messages.

    +

    If you use the -v flag, rclone will produce Error, Info and Debug messages.

    +

    If you use the --log-file=FILE option, rclone will redirect Error, Info and Debug messages along with standard error to FILE.

    Exit Code

    If any errors occurred during the command, rclone will set a non zero exit code. This allows scripts to detect when rclone operations have failed.
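A wrapper script can act on that exit code like this (the child process here is a stand-in for a failing rclone run, not a real invocation):

```python
import subprocess
import sys

# run a command that fails with a non zero exit code, as rclone
# does when a command encounters errors
proc = subprocess.run([sys.executable, "-c", "raise SystemExit(7)"])
if proc.returncode != 0:
    print("transfer failed with exit code", proc.returncode)
```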

    Configuring rclone on a remote / headless machine

    @@ -428,6 +458,7 @@ y/e/d>

    Rclone has a sophisticated set of include and exclude rules. Some of these are based on patterns and some on other things like file size.

    The filters are applied for the copy, sync, move, ls, lsl, md5sum, sha1sum, size, delete and check operations. Note that purge does not obey the filters.

    Each path as it passes through rclone is matched against the include and exclude rules like --include, --exclude, --include-from, --exclude-from, --filter, or --filter-from. The simplest way to try them out is using the ls command, or --dry-run together with -v.

    +

    Important Due to limitations of the command line parser you can only use any of these options once - if you duplicate them then rclone will use the last one only.

    Patterns

    The patterns used to match files for inclusion or exclusion are based on "file globs" as used by the unix shell.

    If the pattern starts with a / then it only matches at the top level of the directory tree, relative to the root of the remote. If it doesn't start with / then it is matched starting at the end of the path, but it will only match a complete path element:
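That anchoring rule can be sketched with a small matcher (a simplified illustration, not rclone's implementation; note `*` and `?` never cross a `/` here):

```python
import re

def glob_to_regex(glob):
    out = ""
    for ch in glob:
        if ch == "*":
            out += "[^/]*"   # * stays within one path element
        elif ch == "?":
            out += "[^/]"
        else:
            out += re.escape(ch)
    return out

def glob_match(pattern, path):
    rx = glob_to_regex(pattern.lstrip("/"))
    if pattern.startswith("/"):
        return re.fullmatch(rx, path) is not None  # anchored at the root
    # unanchored: match at the end of the path, on a complete path element
    return re.fullmatch("(?:.*/)?" + rx, path) is not None
```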

    @@ -465,9 +496,17 @@ y/e/d>
    \*.jpg       - matches "*.jpg"
     \\.jpg       - matches "\.jpg"
     \[one\].jpg  - matches "[one].jpg"
    +

    Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so rclone copy "remote:dir*.jpg" /path/to/dir won't work - what is required is rclone --include "*.jpg" copy remote:dir /path/to/dir

    +

    Directories

    +

    Rclone keeps track of directories that could match any file patterns.

    +

    Eg if you add the include rule

    +
/a/*.jpg
    +

    Rclone will synthesize the directory include rule

    +
/a/
    +

If you put any rules which end in / then it will only match directories.

    +

Directory matches are only used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google cloud storage, b2) which don't have a concept of directory.

    Differences between rsync and rclone patterns

    Rclone implements bash style {a,b,c} glob matching which rsync doesn't.

    -

    Rclone ignores / at the end of a pattern.

    Rclone always does a wildcard match so \ must always escape a \.
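The bash-style alternation can be understood as expanding each {a,b,c} group into separate patterns (illustrative only; nested braces are not handled here):

```python
import re

def expand_braces(pattern):
    """'*.{jpg,png}' -> ['*.jpg', '*.png']."""
    m = re.search(r"\{([^{}]*)\}", pattern)
    if not m:
        return [pattern]
    results = []
    for alt in m.group(1).split(","):
        results.extend(expand_braces(pattern[:m.start()] + alt + pattern[m.end():]))
    return results
```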

    How the rules are used

    Rclone maintains a list of include rules and exclude rules.

    @@ -490,6 +529,7 @@ y/e/d>
  • secret17.jpg
  • non *.jpg and *.png
  • +

A similar process is done on directory entries before recursing into them. This only works on remotes which have a concept of directory (Eg local, drive, onedrive, amazon cloud drive) and not on bucket based remotes (eg s3, swift, google cloud storage, b2).

    Adding filtering rules

    Filtering rules are added with the following command line flags.

    --exclude - Exclude files matching pattern

    @@ -680,7 +720,8 @@ file2.jpg

    Hash

    -

    The cloud storage system supports various hash types of the objects.
    The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command.

    +

    The cloud storage system supports various hash types of the objects.
    +The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command.

    To use the checksum checks between filesystems they must support a common hash type.

    ModTime

The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum flag.

    @@ -800,7 +841,12 @@ y/e/d> y

    If you prefer an archive copy then you might use --drive-formats pdf, or if you prefer openoffice/libreoffice formats you might use --drive-formats ods,odt.

Note that rclone adds the extension to the google doc, so if it is called My Spreadsheet on google docs, it will be exported as My Spreadsheet.xlsx or My Spreadsheet.pdf etc.

    Here are the possible extensions with their corresponding mime types.

@@ -1007,6 +1053,13 @@
 Choose a number from below, or type in your own value
  9 / South America (Sao Paulo) Region.
    \ "sa-east-1"
 location_constraint> 1
+The server-side encryption algorithm used when storing this object in S3.
+Choose a number from below, or type in your own value
+ 1 / None
+   \ ""
+ 2 / AES256
+   \ "AES256"
+server_side_encryption>
 Remote config
 --------------------
 [remote]
@@ -1167,6 +1220,8 @@
 Choose a number from below, or type in your own value
  6 / OVH
    \ "https://auth.cloud.ovh.net/v2.0"
 auth> 1
+User domain - optional (v3 auth)
+domain>
 Default Tenant name - optional
 tenant>
 Region name - optional
@@ -1174,6 +1229,8 @@
 region>
 Storage URL - optional
 storage_url>
 Remote config
+AuthVersion - optional - set to (1,2,3) if your auth URL has no version
+auth_version>
 --------------------
 [remote]
 user = user_name
@@ -1205,6 +1262,12 @@
 y/e/d> y

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

    Limitations

    The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

    +

    Troubleshooting

    +

    Rclone gives Failed to create file system for "remote:": Bad Request

    +

    Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.

    +

    So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag.

    +

    Rclone gives Failed to create file system: Response didn't have storage storage url and auth token

    +

    This is most likely caused by forgetting to specify your tenant when setting up a swift remote.

    Dropbox

    Paths are specified as remote:path

    Dropbox paths may be as deep as required, eg remote:directory/subdirectory.

    @@ -1326,6 +1389,8 @@ Google Application Client Secret - leave blank normally. client_secret> Project number optional - needed only for list/create/delete buckets - see your developer console. project_number> 12345678 +Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login. +service_account_file> Access Control List for new objects. Choose a number from below, or type in your own value * Object owner gets OWNER access, and all Authenticated Users get READER access. @@ -1390,6 +1455,10 @@ y/e/d> y
    rclone ls remote:bucket

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    rclone sync /home/local/directory remote:bucket
    +

    Service Account support

    +

    You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

    +

    To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.

    +

    To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow.

    Modified time

Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns.

    Amazon Cloud Drive

    @@ -1629,6 +1698,8 @@ y/e/d> y
    rclone ls remote:

    To copy a local directory to an Hubic directory called backup

    rclone copy /home/source remote:backup
    +

    If you want the directory to be visible in the official Hubic browser, you need to copy your files to the default directory

    +
    rclone copy /home/source remote:default/backup

    Modified time

    The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

    @@ -1703,14 +1774,21 @@ y/e/d> y

    Modified times are used in syncing and are fully supported except in the case of updating a modification time on an existing object. In this case the object will be uploaded again as B2 doesn't have an API method to set the modification time independent of doing an upload.

    SHA1 checksums

    The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process. You can use the --checksum flag.

    +

    Large files which are uploaded in chunks will store their SHA1 on the object as X-Bz-Info-large_file_sha1 as recommended by Backblaze.

    Versions

    When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will still be available.

    The old versions of files are visible in the B2 web interface, but not via rclone yet.

    Rclone doesn't provide any way of managing old versions (downloading them or deleting them) at the moment. When you purge a bucket, all the old versions will be deleted.

    Transfers

Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about --transfers 32 though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4 is definitely too low for Backblaze B2 though.

    +

    Specific options

    +

    Here are the command line options specific to this cloud storage system.

    +

--b2-chunk-size=SIZE

    +

When uploading large files chunk the file into this size. Note that these chunks are buffered in memory. 100,000,000 Bytes is the minimum size (default 96M).

    +

    --b2-upload-cutoff=SIZE

    +

    Cutoff for switching to chunked upload (default 4.657GiB == 5GB). Files above this size will be uploaded in chunks of --b2-chunk-size. The default value is the largest file which can be uploaded without chunks.
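The arithmetic behind these two options works out as follows (an illustration of the cutoff/chunk relationship, not rclone's code; the defaults assume 5GB == 5*10^9 bytes and a 96M chunk):

```python
def b2_chunks(file_size, upload_cutoff=5 * 10**9, chunk_size=96 * 2**20):
    """Return the number of upload calls for a file of file_size bytes."""
    if file_size <= upload_cutoff:
        return 1  # single upload below the cutoff
    return -(-file_size // chunk_size)  # ceiling division into chunks
```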

    API

    -

    Here are some notes I made on the backblaze API while integrating it with rclone which detail the changes I'd like to see.

    +

    Here are some notes I made on the backblaze API while integrating it with rclone.

    Yandex Disk

    Yandex Disk is a cloud storage solution created by Yandex.

    Yandex paths may be as deep as required, eg remote:directory/subdirectory.

    @@ -1814,6 +1892,46 @@ nounc = true

    This will use UNC paths on c:\src but not on z:\dst. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.

    Changelog

    Extension