Version v1.31

This commit is contained in:
Nick Craig-Wood 2016-07-13 12:26:22 +01:00
parent 96e2271cce
commit 63f6827a0d
7 changed files with 1421 additions and 199 deletions


@ -12,7 +12,7 @@
<div id="header">
<h1 class="title">rclone(1) User Manual</h1>
<h2 class="author">Nick Craig-Wood</h2>
<h3 class="date">Jul 13, 2016</h3>
</div>
<h1 id="rclone">Rclone</h1>
<p><a href="http://rclone.org/"><img src="http://rclone.org/img/rclone-120x120.png" alt="Logo" /></a></p>
@ -66,7 +66,17 @@ sudo chmod 755 /usr/sbin/rclone
#install manpage
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb</code></pre>
<h2 id="installation-with-ansible">Installation with Ansible</h2>
<p>This can be done with <a href="https://github.com/stefangweichinger/ansible-rclone">Stefan Weichinger's ansible role</a>.</p>
<p>Instructions</p>
<ol style="list-style-type: decimal">
<li><code>git clone https://github.com/stefangweichinger/ansible-rclone.git</code> into your local roles-directory</li>
<li>add the role to the hosts you want rclone installed to:</li>
</ol>
<pre><code> - hosts: rclone-hosts
roles:
- rclone</code></pre>
<h2 id="configure">Configure</h2>
<p>First you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file <code>.rclone.conf</code> in your home directory by default. (You can use the <code>--config</code> option to choose a different config file.)</p>
<p>The easiest way to make the config is to run rclone with the config option:</p>
@ -108,6 +118,7 @@ destpath/two.txt</code></pre>
<pre><code>destpath/sourcepath/one.txt
destpath/sourcepath/two.txt</code></pre>
<p>If you are familiar with <code>rsync</code>, rclone always works as if you had written a trailing / - meaning &quot;copy the contents of this directory&quot;. This applies to all commands and whether you are talking about the source or destination.</p>
<p>See the <code>--no-traverse</code> option for controlling whether rclone lists the destination directory or not.</p>
<h3 id="rclone-sync-sourcepath-destpath">rclone sync source:path dest:path</h3>
<p>Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary.</p>
<p><strong>Important</strong>: Since this can cause data loss, test first with the <code>--dry-run</code> flag to see exactly what would be copied and deleted.</p>
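<p>For example, to preview what a sync would do before committing to it (the local directory and remote name here are illustrative):</p>
<pre><code>rclone --dry-run sync /home/local/directory remote:backup</code></pre>
<p>Once the output looks right, re-run the same command without <code>--dry-run</code> to perform the sync.</p>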
@ -115,9 +126,9 @@ destpath/sourcepath/two.txt</code></pre>
<p>It is always the contents of the directory that is synced, not the directory itself, so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See the extended explanation in the <code>copy</code> command above if unsure.</p>
<p>If dest:path doesn't exist, it is created and the source:path contents go there.</p>
<h3 id="move-sourcepath-destpath">move source:path dest:path</h3>
<p>Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap.</p>
<p>If no filters are in use and if possible this will server side move <code>source:path</code> into <code>dest:path</code>. After this <code>source:path</code> will no longer exist.</p>
<p>Otherwise for each file in <code>source:path</code> selected by the filters (if any) this will move it into <code>dest:path</code>. If possible a server side move will be used, otherwise it will copy it (server side if possible) into <code>dest:path</code> then delete the original (if no errors on copy) in <code>source:path</code>.</p>
<p><strong>Important</strong>: Since this can cause data loss, test first with the <code>--dry-run</code> flag.</p>
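<p>For example, to preview a move before running it for real (paths are illustrative):</p>
<pre><code>rclone --dry-run move /home/local/directory remote:archive</code></pre>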
<h3 id="rclone-ls-remotepath">rclone ls remote:path</h3>
<p>List all the objects in the path with size and path.</p>
@ -149,6 +160,8 @@ rclone --dry-run --min-size 100M delete remote:path</code></pre>
<h3 id="rclone-check-sourcepath-destpath">rclone check source:path dest:path</h3>
<p>Checks the files in the source and destination match. It compares sizes and MD5SUMs and prints a report of files which don't match. It doesn't alter the source or destination.</p>
<p><code>--size-only</code> may be used to only compare the sizes, not the MD5SUMs.</p>
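<p>For example, to verify a backup by size only (paths and remote name are illustrative):</p>
<pre><code>rclone check --size-only /home/local/directory remote:backup</code></pre>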
<h3 id="rclone-cleanup-remotepath">rclone cleanup remote:path</h3>
<p>Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes.</p>
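<p>For example, to clean up an illustrative remote:</p>
<pre><code>rclone cleanup remote:</code></pre>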
<h3 id="rclone-dedupe-remotepath">rclone dedupe remote:path</h3>
<p>By default <code>dedupe</code> interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names.</p>
<p>The <code>dedupe</code> command will delete all but one of any identical (same md5sum) files it finds without confirmation. This means that for most duplicated files the <code>dedupe</code> command will not be interactive. You can use <code>--dry-run</code> to see what would happen without doing anything.</p>
@ -209,6 +222,16 @@ two-3.txt: renamed from: two.txt</code></pre>
<p>Enter an interactive configuration session.</p>
<h3 id="rclone-help">rclone help</h3>
<p>Prints help on rclone commands and options.</p>
<h2 id="copying-single-files">Copying single files</h2>
<p>rclone normally syncs or copies directories. However if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error <code>Failed to create file system for &quot;remote:file&quot;: is a file not a directory</code> if it isn't.</p>
<p>For example, suppose you have a remote with a file in called <code>test.jpg</code>, then you could copy just that file like this</p>
<pre><code>rclone copy remote:test.jpg /tmp/download</code></pre>
<p>The file <code>test.jpg</code> will be placed inside <code>/tmp/download</code>.</p>
<p>This is equivalent to specifying</p>
<pre><code>rclone copy --no-traverse --files-from /tmp/files remote: /tmp/download</code></pre>
<p>Where <code>/tmp/files</code> contains the single line</p>
<pre><code>test.jpg</code></pre>
<p>It is recommended to use <code>copy</code>, not <code>sync</code>, when copying single files. They have pretty much the same effect but <code>copy</code> will use a lot less memory.</p>
<h2 id="quoting-and-the-shell">Quoting and the shell</h2>
<p>When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.</p>
<p>Here are some gotchas which may help users unfamiliar with the shell rules</p>
@ -291,6 +314,9 @@ rclone sync /path/to/files remote:current-backup</code></pre>
<h3 id="no-gzip-encoding">--no-gzip-encoding</h3>
<p>Don't set <code>Accept-Encoding: gzip</code>. This means that rclone won't ask the server for compressed files automatically. Useful if you've set the server to return files with <code>Content-Encoding: gzip</code> but you uploaded compressed files.</p>
<p>There is no need to set this in normal operation, and doing so will decrease the network transfer efficiency of rclone.</p>
<h3 id="no-update-modtime">--no-update-modtime</h3>
<p>When using this flag, rclone won't update modification times of remote files if they are incorrect, as it normally would.</p>
<p>This can be used if the remote is being synced with another tool also (eg the Google Drive client).</p>
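<p>For example, to sync without touching the modification times on a remote that another tool also writes to (paths are illustrative):</p>
<pre><code>rclone sync --no-update-modtime /home/local/directory remote:backup</code></pre>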
<h3 id="q---quiet">-q, --quiet</h3>
<p>Normally rclone outputs stats and a completion message. If you set this flag it will make as little output as possible.</p>
<h3 id="retries-int">--retries int</h3>
@ -306,8 +332,8 @@ rclone sync /path/to/files remote:current-backup</code></pre>
<p>The default is <code>1m</code>. Use 0 to disable.</p>
<h3 id="delete-beforeduringafter">--delete-(before,during,after)</h3>
<p>This option allows you to specify when files on your destination are deleted when you sync folders.</p>
<p>Specifying the value <code>--delete-before</code> will delete all files present on the destination, but not on the source, <em>before</em> starting the transfer of any new or updated files. This uses extra memory as it has to store the source listing before proceeding.</p>
<p>Specifying <code>--delete-during</code> (default value) will delete files while checking and uploading files. This is usually the fastest option. Currently this works the same as <code>--delete-after</code> but it may change in the future.</p>
<p>Specifying <code>--delete-after</code> will delay deletion of files until all new/updated files have been successfully transferred.</p>
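<p>For example, to defer deletions until all transfers have succeeded (paths are illustrative):</p>
<pre><code>rclone sync --delete-after /home/local/directory remote:backup</code></pre>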
<h3 id="timeouttime">--timeout=TIME</h3>
<p>This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.</p>
@ -377,6 +403,11 @@ c/u/q&gt;</code></pre>
<p><code>--no-check-certificate</code> controls whether a client verifies the server's certificate chain and host name. If <code>--no-check-certificate</code> is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.</p>
<p>This option defaults to <code>false</code>.</p>
<p><strong>This should be used only for testing.</strong></p>
<h3 id="no-traverse">--no-traverse</h3>
<p>The <code>--no-traverse</code> flag controls whether the destination file system is traversed when using the <code>copy</code> or <code>move</code> commands.</p>
<p>If you are only copying a small number of files and/or have a large number of files on the destination then <code>--no-traverse</code> will stop rclone listing the destination and save time.</p>
<p>However if you are copying a large number of files, especially if you are doing a copy where lots of the files haven't changed and won't need copying, then you shouldn't use <code>--no-traverse</code>.</p>
<p>It can also be used to reduce the memory usage of rclone when copying - <code>rclone --no-traverse copy src dst</code> won't load either the source or destination listings into memory so will use the minimum amount of memory.</p>
<h2 id="filtering">Filtering</h2>
<p>For the filtering options</p>
<ul>
@ -529,7 +560,7 @@ y/e/d&gt;</code></pre>
<li><code>secret17.jpg</code></li>
<li>non <code>*.jpg</code> and <code>*.png</code></li>
</ul>
<p>A similar process is done on directory entries before recursing into them. This only works on remotes which have a concept of directory (Eg local, google drive, onedrive, amazon drive) and not on bucket based remotes (eg s3, swift, google cloud storage, b2).</p>
<h2 id="adding-filtering-rules">Adding filtering rules</h2>
<p>Filtering rules are added with the following command line flags.</p>
<h3 id="exclude---exclude-files-matching-pattern"><code>--exclude</code> - Exclude files matching pattern</h3>
@ -579,6 +610,24 @@ file2.avi</code></pre>
file1.jpg
file2.jpg</code></pre>
<p>Then use as <code>--files-from files-from.txt</code>. This will only transfer <code>file1.jpg</code> and <code>file2.jpg</code> providing they exist.</p>
<p>For example, let's say you had a few files you want to back up regularly with these absolute paths:</p>
<pre><code>/home/user1/important
/home/user1/dir/file
/home/user2/stuff</code></pre>
<p>To copy these you'd find a common subdirectory - in this case <code>/home</code> and put the remaining files in <code>files-from.txt</code> with or without leading <code>/</code>, eg</p>
<pre><code>user1/important
user1/dir/file
user2/stuff</code></pre>
<p>You could then copy these to a remote like this</p>
<pre><code>rclone copy --files-from files-from.txt /home remote:backup</code></pre>
<p>The 3 files will arrive in <code>remote:backup</code> with the paths as in the <code>files-from.txt</code>.</p>
<p>You could of course choose <code>/</code> as the root too in which case your <code>files-from.txt</code> might look like this.</p>
<pre><code>/home/user1/important
/home/user1/dir/file
/home/user2/stuff</code></pre>
<p>And you would transfer it like this</p>
<pre><code>rclone copy --files-from files-from.txt / remote:backup</code></pre>
<p>In this case there will be an extra <code>home</code> directory on the remote.</p>
<h3 id="min-size---dont-transfer-any-file-smaller-than-this"><code>--min-size</code> - Don't transfer any file smaller than this</h3>
<p>This option controls the minimum size file which will be transferred. This defaults to <code>kBytes</code> but a suffix of <code>k</code>, <code>M</code>, or <code>G</code> can be used.</p>
<p>For example <code>--min-size 50k</code> means no files smaller than 50kByte will be transferred.</p>
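<p>So a copy that skips anything under 50 kBytes would look like this (paths are illustrative):</p>
<pre><code>rclone copy --min-size 50k /home/local/directory remote:backup</code></pre>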
@ -1161,6 +1210,40 @@ region = other-v2-signature</code></pre>
],
}</code></pre>
<p>Because this is a json dump, it is encoding the <code>/</code> as <code>\/</code>, so if you use the secret key as <code>xxxxxx/xxxx</code> it will work fine.</p>
<h3 id="minio">Minio</h3>
<p><a href="https://minio.io/">Minio</a> is an object storage server built for cloud application developers and devops.</p>
<p>It is very easy to install and provides an S3 compatible server which can be used by rclone.</p>
<p>To use it, install Minio following the instructions from the web site.</p>
<p>When it configures itself Minio will print something like this</p>
<pre><code>AccessKey: WLGDGYAQYIGI833EV05A SecretKey: BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF Region: us-east-1
Minio Object Storage:
http://127.0.0.1:9000
http://10.0.0.3:9000
Minio Browser:
http://127.0.0.1:9000
http://10.0.0.3:9000</code></pre>
<p>These details need to go into <code>rclone config</code> like this. Note that it is important to put the region in as stated above.</p>
<pre><code>env_auth&gt; 1
access_key_id&gt; WLGDGYAQYIGI833EV05A
secret_access_key&gt; BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region&gt; us-east-1
endpoint&gt; http://10.0.0.3:9000
location_constraint&gt;
server_side_encryption&gt;</code></pre>
<p>Which makes the config file look like this</p>
<pre><code>[minio]
env_auth = false
access_key_id = WLGDGYAQYIGI833EV05A
secret_access_key = BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region = us-east-1
endpoint = http://10.0.0.3:9000
location_constraint =
server_side_encryption = </code></pre>
<p>Minio doesn't support all the features of S3 yet. In particular it doesn't support MD5 checksums (ETags) or metadata. This means rclone can't check MD5SUMs or store the modified date. However you can work around this with the <code>--size-only</code> flag of rclone.</p>
<p>So once set up, for example to copy files into a bucket</p>
<pre><code>rclone --size-only copy /path/to/files minio:bucket</code></pre>
<h2 id="swift">Swift</h2>
<p>Swift refers to <a href="http://www.openstack.org/software/openstack-storage/">Openstack Object Storage</a>. Commercial implementations of that being:</p>
<ul>
@ -1224,6 +1307,8 @@ User domain - optional (v3 auth)
domain&gt; Default
Tenant name - optional
tenant&gt;
Tenant domain - optional (v3 auth)
tenant_domain&gt;
Region name - optional
region&gt;
Storage URL - optional
@ -1461,10 +1546,10 @@ y/e/d&gt; y</code></pre>
<p>To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the <code>service_account_file</code> prompt and rclone won't use the browser based authentication flow.</p>
<h3 id="modified-time-3">Modified time</h3>
<p>Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the &quot;mtime&quot; key in RFC3339 format accurate to 1ns.</p>
<h2 id="amazon-drive">Amazon Drive</h2>
<p>Paths are specified as <code>remote:path</code></p>
<p>Paths may be as deep as required, eg <code>remote:directory/subdirectory</code>.</p>
<p>The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. <code>rclone config</code> walks you through it.</p>
<p>Here is an example of how to make a remote called <code>remote</code>. First run:</p>
<pre><code> rclone config</code></pre>
<p>This will guide you through an interactive setup process:</p>
@ -1520,26 +1605,26 @@ y/e/d&gt; y</code></pre>
<p>See the <a href="http://rclone.org/remote_setup/">remote setup docs</a> for how to set it up on a machine with no Internet browser available.</p>
<p>Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on <code>http://127.0.0.1:53682/</code> and it may require you to unblock it temporarily if you are running a host firewall.</p>
<p>Once configured you can then use <code>rclone</code> like this,</p>
<p>List directories in top level of your Amazon Drive</p>
<pre><code>rclone lsd remote:</code></pre>
<p>List all the files in your Amazon Drive</p>
<pre><code>rclone ls remote:</code></pre>
<p>To copy a local directory to an Amazon Drive directory called backup</p>
<pre><code>rclone copy /home/source remote:backup</code></pre>
<h3 id="modified-time-and-md5sums-1">Modified time and MD5SUMs</h3>
<p>Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.</p>
<p>It does store MD5SUMs so for a more accurate sync, you can use the <code>--checksum</code> flag.</p>
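<p>For example, to sync to an illustrative Amazon Drive remote comparing MD5SUMs rather than the (unreliable) modification times:</p>
<pre><code>rclone sync --checksum /home/local/directory remote:backup</code></pre>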
<h3 id="deleting-files-1">Deleting files</h3>
<p>Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website.</p>
<h3 id="specific-options-3">Specific options</h3>
<p>Here are the command line options specific to this cloud storage system.</p>
<h4 id="acd-templink-thresholdsize">--acd-templink-threshold=SIZE</h4>
<p>Files this size or more will be downloaded via their <code>tempLink</code>. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.</p>
<p>To download files above this threshold, rclone requests a <code>tempLink</code> which downloads the file through a temporary URL directly from the underlying S3 storage.</p>
<h3 id="limitations-3">Limitations</h3>
<p>Note that Amazon Drive is case insensitive so you can't have a file called &quot;Hello.doc&quot; and one called &quot;hello.doc&quot;.</p>
<p>Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see <code>--retries</code> flag) which should hopefully work around this problem.</p>
<p>Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.</p>
<p>At the time of writing (Jan 2016) the limit is in the area of 50GB per file. This means that larger files are likely to fail.</p>
<p>Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use the <code>--max-size=50GB</code> option to limit the maximum size of uploaded files.</p>
<h2 id="microsoft-one-drive">Microsoft One Drive</h2> <h2 id="microsoft-one-drive">Microsoft One Drive</h2>
@ -1775,20 +1860,65 @@ y/e/d&gt; y</code></pre>
<h3 id="sha1-checksums">SHA1 checksums</h3>
<p>The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process. You can use the <code>--checksum</code> flag.</p>
<p>Large files which are uploaded in chunks will store their SHA1 on the object as <code>X-Bz-Info-large_file_sha1</code> as recommended by Backblaze.</p>
<h3 id="versions">Versions</h3>
<p>When rclone uploads a new version of a file it creates a <a href="https://www.backblaze.com/b2/docs/file_versions.html">new version of it</a>. Likewise when you delete a file, the old version will still be available.</p>
<p>The old versions of files are visible in the B2 web interface, but not via rclone yet.</p>
<p>Rclone doesn't provide any way of managing old versions (downloading them or deleting them) at the moment. When you <code>purge</code> a bucket, all the old versions will be deleted.</p>
<h3 id="transfers">Transfers</h3> <h3 id="transfers">Transfers</h3>
<p>Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about <code>--transfers 32</code> though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of <code>--transfers 4</code> is definitely too low for Backblaze B2 though.</p> <p>Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about <code>--transfers 32</code> though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of <code>--transfers 4</code> is definitely too low for Backblaze B2 though.</p>
<h3 id="versions">Versions</h3>
<p>When rclone uploads a new version of a file it creates a <a href="https://www.backblaze.com/b2/docs/file_versions.html">new version of it</a>. Likewise when you delete a file, the old version will still be available.</p>
<p>Old versions of files are visible using the <code>--b2-versions</code> flag.</p>
<p>If you wish to remove all the old versions then you can use the <code>rclone cleanup remote:bucket</code> command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg <code>rclone cleanup remote:bucket/path/to/stuff</code>.</p>
<p>When you <code>purge</code> a bucket, the current and the old versions will be deleted then the bucket will be deleted.</p>
<p>However <code>delete</code> will cause the current versions of the files to become hidden old versions.</p>
<p>Here is a session showing the listing and retrieval of an old version followed by a <code>cleanup</code> of the old versions.</p>
<p>Show current version and all the versions with <code>--b2-versions</code> flag.</p>
<pre><code>$ rclone -q ls b2:cleanup-test
9 one.txt
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt</code></pre>
<p>Retrieve an old version</p>
<pre><code>$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt</code></pre>
<p>Clean up all the old versions and show that they've gone.</p>
<pre><code>$ rclone -q cleanup b2:cleanup-test
$ rclone -q ls b2:cleanup-test
9 one.txt
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt</code></pre>
<h3 id="specific-options-5">Specific options</h3> <h3 id="specific-options-5">Specific options</h3>
<p>Here are the command line options specific to this cloud storage system.</p> <p>Here are the command line options specific to this cloud storage system.</p>
<h4 id="b2-chunk-size-valueesize">--b2-chunk-size=SIZE</h4> <h4 id="b2-chunk-size-valueesize">--b2-chunk-size=SIZE</h4>
<p>When uploading large files chunk the file into this size. Note that these chunks are buffered in memory. 100,000,000 bytes is the minimum size (default 96M).</p> <p>When uploading large files chunk the file into this size. Note that these chunks are buffered in memory. 100,000,000 bytes is the minimum size (default 96M).</p>
<h4 id="b2-upload-cutoffsize">--b2-upload-cutoff=SIZE</h4> <h4 id="b2-upload-cutoffsize">--b2-upload-cutoff=SIZE</h4>
<p>Cutoff for switching to chunked upload (default 4.657GiB == 5GB). Files above this size will be uploaded in chunks of <code>--b2-chunk-size</code>. The default value is the largest file which can be uploaded without chunks.</p> <p>Cutoff for switching to chunked upload (default 4.657GiB == 5GB). Files above this size will be uploaded in chunks of <code>--b2-chunk-size</code>. The default value is the largest file which can be uploaded without chunks.</p>
<h3 id="api">API</h3> <h4 id="b2-test-modeflag">--b2-test-mode=FLAG</h4>
<p>Here are <a href="https://gist.github.com/ncw/166dabf352b399f1cc1c">some notes I made on the backblaze API</a> while integrating it with rclone.</p> <p>This is for debugging purposes only.</p>
<p>Setting FLAG to one of the strings below will cause b2 to return specific errors for debugging purposes.</p>
<ul>
<li><code>fail_some_uploads</code></li>
<li><code>expire_some_account_authorization_tokens</code></li>
<li><code>force_cap_exceeded</code></li>
</ul>
<p>These will be set in the <code>X-Bz-Test-Mode</code> header which is documented in the <a href="https://www.backblaze.com/b2/docs/integration_checklist.html">b2 integrations checklist</a>.</p>
<h4 id="b2-versions">--b2-versions</h4>
<p>When set rclone will show and act on older versions of files. For example</p>
<p>Listing without <code>--b2-versions</code></p>
<pre><code>$ rclone -q ls b2:cleanup-test
9 one.txt</code></pre>
<p>And with</p>
<pre><code>$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt</code></pre>
<p>Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.</p>
<p>Note that when using <code>--b2-versions</code> no file write operations are permitted, so you can't upload files or delete them.</p>
<h2 id="yandex-disk">Yandex Disk</h2> <h2 id="yandex-disk">Yandex Disk</h2>
<p><a href="https://disk.yandex.com">Yandex Disk</a> is a cloud storage solution created by <a href="http://yandex.com">Yandex</a>.</p> <p><a href="https://disk.yandex.com">Yandex Disk</a> is a cloud storage solution created by <a href="http://yandex.com">Yandex</a>.</p>
<p>Yandex paths may be as deep as required, eg <code>remote:directory/subdirectory</code>.</p> <p>Yandex paths may be as deep as required, eg <code>remote:directory/subdirectory</code>.</p>
@ -1892,6 +2022,50 @@ nounc = true</code></pre>
<p>This will use UNC paths on <code>c:\src</code> but not on <code>z:\dst</code>. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.</p> <p>This will use UNC paths on <code>c:\src</code> but not on <code>z:\dst</code>. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.</p>
<h2 id="changelog">Changelog</h2> <h2 id="changelog">Changelog</h2>
<ul> <ul>
<li>v1.31 - 2016-07-13
<ul>
<li>New Features</li>
<li>Reduce memory on sync by about 50%</li>
<li>Implement --no-traverse flag to stop copy traversing the destination remote.
<ul>
<li>This can be used to reduce memory usage down to the smallest possible.</li>
<li>Useful to copy a small number of files into a large destination folder.</li>
</ul></li>
<li>Implement cleanup command for emptying trash / removing old versions of files
<ul>
<li>Currently B2 only</li>
</ul></li>
<li>Single file handling improved
<ul>
<li>Now copied with --files-from</li>
<li>Automatically sets --no-traverse when copying a single file</li>
</ul></li>
<li>Info on installing with Ansible - thanks Stefan Weichinger</li>
<li>Implement --no-update-modtime flag to stop rclone fixing the remote modified times.</li>
<li>Bug Fixes</li>
<li>Fix move command - stop it running for overlapping Fses - this was causing data loss.</li>
<li>Local</li>
<li>Fix incomplete hashes - this was causing problems for B2.</li>
<li>Amazon Drive</li>
<li>Rename Amazon Cloud Drive to Amazon Drive - no changes to config file needed.</li>
<li>Swift</li>
<li>Add support for non-default project domain - thanks Antonio Messina.</li>
<li>S3</li>
<li>Add instructions on how to use rclone with minio.</li>
<li>Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.</li>
<li>Skip setting the modified time for objects &gt; 5GB as it isn't possible.</li>
<li>Backblaze B2</li>
<li>Add --b2-versions flag so old versions can be listed and retrieved.</li>
<li>Treat 403 errors (eg cap exceeded) as fatal.</li>
<li>Implement cleanup command for deleting old file versions.</li>
<li>Make error handling compliant with B2 integrations notes.</li>
<li>Fix handling of token expiry.</li>
<li>Implement --b2-test-mode to set <code>X-Bz-Test-Mode</code> header.</li>
<li>Set cutoff for chunked upload to 200MB as per B2 guidelines.</li>
<li>Make upload multi-threaded.</li>
<li>Dropbox</li>
<li>Don't retry 461 errors.</li>
</ul></li>
<li>v1.30 - 2016-06-18 <li>v1.30 - 2016-06-18
<ul> <ul>
<li>New Features</li> <li>New Features</li>
@ -2562,6 +2736,18 @@ h='&#x67;&#x6d;&#x61;&#x69;&#108;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#106;&#1
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>'); document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// --> // -->
</script><noscript>&#106;&#114;&#x77;&#x39;&#x37;&#50;&#32;&#x61;&#116;&#32;&#x67;&#x6d;&#x61;&#x69;&#108;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li> </script><noscript>&#106;&#114;&#x77;&#x39;&#x37;&#50;&#32;&#x61;&#116;&#32;&#x67;&#x6d;&#x61;&#x69;&#108;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li>
<li>Antonio Messina <script type="text/javascript">
<!--
h='&#x67;&#x6d;&#x61;&#x69;&#108;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#x61;&#110;&#116;&#x6f;&#110;&#x69;&#x6f;&#46;&#x73;&#46;&#x6d;&#x65;&#x73;&#x73;&#x69;&#110;&#x61;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#x61;&#110;&#116;&#x6f;&#110;&#x69;&#x6f;&#46;&#x73;&#46;&#x6d;&#x65;&#x73;&#x73;&#x69;&#110;&#x61;&#32;&#x61;&#116;&#32;&#x67;&#x6d;&#x61;&#x69;&#108;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li>
<li>Stefan G. Weichinger <script type="text/javascript">
<!--
h='&#x6f;&#x6f;&#112;&#x73;&#46;&#x63;&#x6f;&#46;&#x61;&#116;';a='&#64;';n='&#x6f;&#102;&#102;&#x69;&#x63;&#x65;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#x6f;&#102;&#102;&#x69;&#x63;&#x65;&#32;&#x61;&#116;&#32;&#x6f;&#x6f;&#112;&#x73;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#32;&#100;&#x6f;&#116;&#32;&#x61;&#116;</noscript></li>
</ul> </ul>
<h2 id="contact-the-rclone-project">Contact the rclone project</h2> <h2 id="contact-the-rclone-project">Contact the rclone project</h2>
<p>The project website is at:</p> <p>The project website is at:</p>
MANUAL.md
@ -1,6 +1,6 @@
% rclone(1) User Manual % rclone(1) User Manual
% Nick Craig-Wood % Nick Craig-Wood
% Jun 18, 2016 % Jul 13, 2016
Rclone Rclone
====== ======
@ -72,6 +72,23 @@ linux binary downloaded files install example
sudo cp rclone.1 /usr/local/share/man/man1/ sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb sudo mandb
Installation with Ansible
-------
This can be done with [Stefan Weichinger's ansible
role](https://github.com/stefangweichinger/ansible-rclone).
Instructions
1. `git clone https://github.com/stefangweichinger/ansible-rclone.git` into your local roles-directory
2. add the role to the hosts you want rclone installed to:
```
- hosts: rclone-hosts
roles:
- rclone
```
Configure Configure
--------- ---------
@ -155,6 +172,9 @@ written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the This applies to all commands and whether you are talking about the
source or destination. source or destination.
See the `--no-traverse` option for controlling whether rclone lists
the destination directory or not.
### rclone sync source:path dest:path ### ### rclone sync source:path dest:path ###
Sync the source to the destination, changing the destination Sync the source to the destination, changing the destination
@ -178,16 +198,18 @@ go there.
### move source:path dest:path ### ### move source:path dest:path ###
Moves the source to the destination. Moves the contents of the source directory to the destination
directory. Rclone will error if the source and destination overlap.
If there are no filters in use this is equivalent to a copy followed If no filters are in use and if possible this will server side move
by a purge, but may use server side operations to speed it up if `source:path` into `dest:path`. After this `source:path` will no
possible. longer exist.
If filters are in use then it is equivalent to a copy followed by Otherwise for each file in `source:path` selected by the filters (if
delete, followed by an rmdir (which only removes the directory if any) this will move it into `dest:path`. If possible a server side
empty). The individual file moves will be moved with server side move will be used, otherwise it will copy it (server side if possible)
operations if possible. into `dest:path` then delete the original (if no errors on copy) in
`source:path`.
**Important**: Since this can cause data loss, test first with the **Important**: Since this can cause data loss, test first with the
--dry-run flag. --dry-run flag.
@ -262,6 +284,11 @@ don't match. It doesn't alter the source or destination.
`--size-only` may be used to only compare the sizes, not the MD5SUMs. `--size-only` may be used to only compare the sizes, not the MD5SUMs.
### rclone cleanup remote:path ###
Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
### rclone dedupe remote:path ### ### rclone dedupe remote:path ###
By default `dedup` interactively finds duplicate files and offers to By default `dedup` interactively finds duplicate files and offers to
@ -349,6 +376,34 @@ Enter an interactive configuration session.
Prints help on rclone commands and options. Prints help on rclone commands and options.
Copying single files
--------------------
rclone normally syncs or copies directories. However if the source
remote points to a file, rclone will just copy that file. The
destination remote must point to a directory - rclone will give the
error `Failed to create file system for "remote:file": is a file not a
directory` if it isn't.
For example, suppose you have a remote with a file in called
`test.jpg`, then you could copy just that file like this
rclone copy remote:test.jpg /tmp/download
The file `test.jpg` will be placed inside `/tmp/download`.
This is equivalent to specifying
rclone copy --no-traverse --files-from /tmp/files remote: /tmp/download
Where `/tmp/files` contains the single line
test.jpg
It is recommended to use `copy` rather than `sync` when copying single files.
They have pretty much the same effect but `copy` will use a lot less
memory.
Quoting and the shell Quoting and the shell
--------------------- ---------------------
@ -592,6 +647,14 @@ uploaded compressed files.
There is no need to set this in normal operation, and doing so will There is no need to set this in normal operation, and doing so will
decrease the network transfer efficiency of rclone. decrease the network transfer efficiency of rclone.
### --no-update-modtime ###
When using this flag, rclone won't update modification times of remote
files if they are incorrect as it would normally.
This can be used if the remote is being synced with another tool also
(eg the Google Drive client).
### -q, --quiet ### ### -q, --quiet ###
Normally rclone outputs stats and a completion message. If you set Normally rclone outputs stats and a completion message. If you set
@ -629,12 +692,15 @@ The default is `1m`. Use 0 to disable.
This option allows you to specify when files on your destination are This option allows you to specify when files on your destination are
deleted when you sync folders. deleted when you sync folders.
Specifying the value `--delete-before` will delete all files present on the Specifying the value `--delete-before` will delete all files present
destination, but not on the source *before* starting the transfer on the destination, but not on the source *before* starting the
of any new or updated files. transfer of any new or updated files. This uses extra memory as it
has to store the source listing before proceeding.
Specifying `--delete-during` (default value) will delete files while checking Specifying `--delete-during` (default value) will delete files while
and uploading files. This is usually the fastest option. checking and uploading files. This is usually the fastest option.
Currently this works the same as `--delete-after` but it may change in
the future.
Specifying `--delete-after` will delay deletion of files until all new/updated Specifying `--delete-after` will delay deletion of files until all new/updated
files have been successfully transferred. files have been successfully transferred.
@ -799,6 +865,24 @@ This option defaults to `false`.
**This should be used only for testing.** **This should be used only for testing.**
### --no-traverse ###
The `--no-traverse` flag controls whether the destination file system
is traversed when using the `copy` or `move` commands.
If you are only copying a small number of files and/or have a large
number of files on the destination then `--no-traverse` will stop
rclone listing the destination and save time.
However if you are copying a large number of files, especially if you
are doing a copy where lots of the files haven't changed and won't
need copying then you shouldn't use `--no-traverse`.
It can also be used to reduce the memory usage of rclone when copying
- `rclone --no-traverse copy src dst` won't load either the source or
destination listings into memory so will use the minimum amount of
memory.
Filtering Filtering
--------- ---------
@ -1075,7 +1159,7 @@ This would exclude
A similar process is done on directory entries before recursing into A similar process is done on directory entries before recursing into
them. This only works on remotes which have a concept of directory them. This only works on remotes which have a concept of directory
(Eg local, drive, onedrive, amazon cloud drive) and not on bucket (Eg local, google drive, onedrive, amazon drive) and not on bucket
based remotes (eg s3, swift, google compute storage, b2). based remotes (eg s3, swift, google compute storage, b2).
## Adding filtering rules ## ## Adding filtering rules ##
@ -1182,6 +1266,41 @@ Prepare a file like this `files-from.txt`
Then use as `--files-from files-from.txt`. This will only transfer Then use as `--files-from files-from.txt`. This will only transfer
`file1.jpg` and `file2.jpg` providing they exist. `file1.jpg` and `file2.jpg` providing they exist.
For example, let's say you had a few files you want to back up
regularly with these absolute paths:
/home/user1/important
/home/user1/dir/file
/home/user2/stuff
To copy these you'd find a common subdirectory - in this case `/home`
and put the remaining files in `files-from.txt` with or without
leading `/`, eg
user1/important
user1/dir/file
user2/stuff
You could then copy these to a remote like this
rclone copy --files-from files-from.txt /home remote:backup
The 3 files will arrive in `remote:backup` with the paths as in the
`files-from.txt`.
You could of course choose `/` as the root too in which case your
`files-from.txt` might look like this.
/home/user1/important
/home/user1/dir/file
/home/user2/stuff
And you would transfer it like this
rclone copy --files-from files-from.txt / remote:backup
In this case there will be an extra `home` directory on the remote.
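The root-choosing step above can also be sketched in code. This is only an illustration of how the relative paths in `files-from.txt` relate to the chosen root (rclone itself does not compute this for you; you pick the root by hand):

```python
import os.path

def files_from(paths):
    """Given absolute paths, return (root, relative paths) suitable for
    `rclone copy --files-from files-from.txt <root> remote:backup`."""
    root = os.path.commonpath(paths)  # deepest shared directory, here /home
    return root, [os.path.relpath(p, root) for p in paths]

root, rels = files_from([
    "/home/user1/important",
    "/home/user1/dir/file",
    "/home/user2/stuff",
])
print(root)   # /home
print(rels)   # ['user1/important', 'user1/dir/file', 'user2/stuff']
```

Choosing `/` as the root instead would simply make every entry in `rels` start with `home/`.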
### `--min-size` - Don't transfer any file smaller than this ### ### `--min-size` - Don't transfer any file smaller than this ###
This option controls the minimum size file which will be transferred. This option controls the minimum size file which will be transferred.
@ -1819,6 +1938,63 @@ removed).
Because this is a json dump, it is encoding the `/` as `\/`, so if you Because this is a json dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine. use the secret key as `xxxxxx/xxxx` it will work fine.
### Minio ###
[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.
It is very easy to install and provides an S3 compatible server which can be used by rclone.
To use it, install Minio following the instructions from the web site.
When it configures itself Minio will print something like this
```
AccessKey: WLGDGYAQYIGI833EV05A SecretKey: BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF Region: us-east-1
Minio Object Storage:
http://127.0.0.1:9000
http://10.0.0.3:9000
Minio Browser:
http://127.0.0.1:9000
http://10.0.0.3:9000
```
These details need to go into `rclone config` like this. Note that it
is important to put the region in as stated above.
```
env_auth> 1
access_key_id> WLGDGYAQYIGI833EV05A
secret_access_key> BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region> us-east-1
endpoint> http://10.0.0.3:9000
location_constraint>
server_side_encryption>
```
Which makes the config file look like this
```
[minio]
env_auth = false
access_key_id = WLGDGYAQYIGI833EV05A
secret_access_key = BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region = us-east-1
endpoint = http://10.0.0.3:9000
location_constraint =
server_side_encryption =
```
Minio doesn't support all the features of S3 yet. In particular it
doesn't support MD5 checksums (ETags) or metadata. This means rclone
can't check MD5SUMs or store the modified date. However you can work
around this with the `--size-only` flag of rclone.
So once set up, for example to copy files into a bucket
rclone --size-only copy /path/to/files minio:bucket
Swift Swift
---------------------------------------- ----------------------------------------
@ -1891,6 +2067,8 @@ User domain - optional (v3 auth)
domain> Default domain> Default
Tenant name - optional Tenant name - optional
tenant> tenant>
Tenant domain - optional (v3 auth)
tenant_domain>
Region name - optional Region name - optional
region> region>
Storage URL - optional Storage URL - optional
@ -2271,7 +2449,7 @@ Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`. Paths may be as deep as required, eg `remote:directory/subdirectory`.
The initial setup for Amazon cloud drive involves getting a token from The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser. `rclone config` walks Amazon which you need to do in your browser. `rclone config` walks
you through it. you through it.
@ -2344,21 +2522,21 @@ you to unblock it temporarily if you are running a host firewall.
Once configured you can then use `rclone` like this, Once configured you can then use `rclone` like this,
List directories in top level of your Amazon cloud drive List directories in top level of your Amazon Drive
rclone lsd remote: rclone lsd remote:
List all the files in your Amazon cloud drive List all the files in your Amazon Drive
rclone ls remote: rclone ls remote:
To copy a local directory to an Amazon cloud drive directory called backup To copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup rclone copy /home/source remote:backup
### Modified time and MD5SUMs ### ### Modified time and MD5SUMs ###
Amazon cloud drive doesn't allow modification times to be changed via Amazon Drive doesn't allow modification times to be changed via
the API so these won't be accurate or used for syncing. the API so these won't be accurate or used for syncing.
It does store MD5SUMs so for a more accurate sync, you can use the It does store MD5SUMs so for a more accurate sync, you can use the
@ -2369,7 +2547,7 @@ It does store MD5SUMs so for a more accurate sync, you can use the
Any files you delete with rclone will end up in the trash. Amazon Any files you delete with rclone will end up in the trash. Amazon
don't provide an API to permanently delete files, nor to empty the don't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Amazon's apps or via trash, so you will have to do that with one of Amazon's apps or via
the Amazon cloud drive website. the Amazon Drive website.
### Specific options ### ### Specific options ###
@ -2379,9 +2557,9 @@ system.
#### --acd-templink-threshold=SIZE #### #### --acd-templink-threshold=SIZE ####
Files this size or more will be downloaded via their `tempLink`. This Files this size or more will be downloaded via their `tempLink`. This
is to work around a problem with Amazon Drive which blocks is to work around a problem with Amazon Drive which blocks downloads
downloads of files bigger than about 10GB. The default for this is of files bigger than about 10GB. The default for this is 9GB which
9GB which shouldn't need to be changed. shouldn't need to be changed.
To download files above this threshold, rclone requests a `tempLink` To download files above this threshold, rclone requests a `tempLink`
which downloads the file through a temporary URL directly from the which downloads the file through a temporary URL directly from the
@ -2389,17 +2567,17 @@ underlying S3 storage.
### Limitations ### ### Limitations ###
Note that Amazon cloud drive is case insensitive so you can't have a Note that Amazon Drive is case insensitive so you can't have a
file called "Hello.doc" and one called "hello.doc". file called "Hello.doc" and one called "hello.doc".
Amazon cloud drive has rate limiting so you may notice errors in the Amazon Drive has rate limiting so you may notice errors in the
sync (429 errors). rclone will automatically retry the sync up to 3 sync (429 errors). rclone will automatically retry the sync up to 3
times by default (see `--retries` flag) which should hopefully work times by default (see `--retries` flag) which should hopefully work
around this problem. around this problem.
Amazon cloud drive has an internal limit of file sizes that can be Amazon Drive has an internal limit of file sizes that can be uploaded
uploaded to the service. This limit is not officially published, to the service. This limit is not officially published, but all files
but all files larger than this will fail. larger than this will fail.
At the time of writing (Jan 2016) this limit is in the area of 50GB per file. At the time of writing (Jan 2016) this limit is in the area of 50GB per file.
This means that larger files are likely to fail. This means that larger files are likely to fail.
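One workaround is the `--max-size=50GB` flag, which stops rclone attempting files over the limit. As a rough illustration of what such a size suffix means in bytes (a sketch only, not rclone's actual flag parser; the set of accepted suffixes may differ):

```python
import re

# Sketch of a binary size-suffix parser like the one flags such as
# --max-size or --b2-chunk-size imply (K = 1024, M = 1024^2, ...).
_UNITS = {"B": 1, "K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def parse_size(text):
    """Parse strings like '50GB', '96M' or '100' into a byte count."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([BKMGT])?B?", text.strip().upper())
    if not m:
        raise ValueError("unparseable size: %r" % text)
    number, unit = m.groups()
    return int(float(number) * _UNITS[unit or "B"])

print(parse_size("50GB"))  # 53687091200
print(parse_size("96M"))   # 100663296
```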
@ -2782,20 +2960,6 @@ will be used in the syncing process. You can use the `--checksum` flag.
Large files which are uploaded in chunks will store their SHA1 on the Large files which are uploaded in chunks will store their SHA1 on the
object as `X-Bz-Info-large_file_sha1` as recommended by Backblaze. object as `X-Bz-Info-large_file_sha1` as recommended by Backblaze.
### Versions ###
When rclone uploads a new version of a file it creates a [new version
of it](https://www.backblaze.com/b2/docs/file_versions.html).
Likewise when you delete a file, the old version will still be
available.
The old versions of files are visible in the B2 web interface, but not
via rclone yet.
Rclone doesn't provide any way of managing old versions (downloading
them or deleting them) at the moment. When you `purge` a bucket, all
the old versions will be deleted.
### Transfers ### ### Transfers ###
Backblaze recommends that you do lots of transfers simultaneously for Backblaze recommends that you do lots of transfers simultaneously for
@ -2806,6 +2970,64 @@ depending on your hardware, how big the files are, how much you want
to load your computer, etc. The default of `--transfers 4` is to load your computer, etc. The default of `--transfers 4` is
definitely too low for Backblaze B2 though. definitely too low for Backblaze B2 though.
### Versions ###
When rclone uploads a new version of a file it creates a [new version
of it](https://www.backblaze.com/b2/docs/file_versions.html).
Likewise when you delete a file, the old version will still be
available.
Old versions of files are visible using the `--b2-versions` flag.
If you wish to remove all the old versions then you can use the
`rclone cleanup remote:bucket` command which will delete all the old
versions of files, leaving the current ones intact. You can also
supply a path and only old versions under that path will be deleted,
eg `rclone cleanup remote:bucket/path/to/stuff`.
When you `purge` a bucket, the current and the old versions will be
deleted then the bucket will be deleted.
However `delete` will cause the current versions of the files to
become hidden old versions.
Here is a session showing the listing and retrieval of an old
version followed by a `cleanup` of the old versions.
Show current version and all the versions with `--b2-versions` flag.
```
$ rclone -q ls b2:cleanup-test
9 one.txt
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
```
Retrieve an old version
```
$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
```
Clean up all the old versions and show that they've gone.
```
$ rclone -q cleanup b2:cleanup-test
$ rclone -q ls b2:cleanup-test
9 one.txt
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
```
### Specific options ### ### Specific options ###
Here are the command line options specific to this cloud storage Here are the command line options specific to this cloud storage
@ -2824,11 +3046,48 @@ Cutoff for switching to chunked upload (default 4.657GiB ==
`--b2-chunk-size`. The default value is the largest file which can be `--b2-chunk-size`. The default value is the largest file which can be
uploaded without chunks. uploaded without chunks.
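The two options interact as follows: files at or below the cutoff are uploaded in one piece, while larger files go up in roughly size divided by chunk-size parts, each buffered in memory. A small sketch of that arithmetic (illustrative only, not rclone's code):

```python
import math

CHUNK_SIZE = 96 * 1024**2    # --b2-chunk-size default, 96M
UPLOAD_CUTOFF = 5 * 1000**3  # --b2-upload-cutoff default, 5GB (4.657GiB)

def b2_chunks(size):
    """Approximate number of upload parts for a file of `size` bytes."""
    if size <= UPLOAD_CUTOFF:
        return 1  # single-part upload below the cutoff
    return math.ceil(size / CHUNK_SIZE)

print(b2_chunks(10 * 1000**3))  # a 10GB file needs 100 chunks
```

Since each chunk is buffered in memory, raising `--transfers` multiplies that buffering accordingly.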
### API ### #### --b2-test-mode=FLAG ####
Here are [some notes I made on the backblaze This is for debugging purposes only.
API](https://gist.github.com/ncw/166dabf352b399f1cc1c) while
integrating it with rclone. Setting FLAG to one of the strings below will cause b2 to return
specific errors for debugging purposes.
* `fail_some_uploads`
* `expire_some_account_authorization_tokens`
* `force_cap_exceeded`
These will be set in the `X-Bz-Test-Mode` header which is documented
in the [b2 integrations
checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).
#### --b2-versions ####
When set rclone will show and act on older versions of files. For example
Listing without `--b2-versions`
```
$ rclone -q ls b2:cleanup-test
9 one.txt
```
And with
```
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
```
Showing that the current version is unchanged but older versions can
be seen. These have the UTC date that they were uploaded to the
server to the nearest millisecond appended to them.
Note that when using `--b2-versions` no file write operations are
permitted, so you can't upload files or delete them.
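The version-name scheme shown in the listings above (a `-v` plus a UTC timestamp to the millisecond, inserted before the extension) can be sketched as follows. The exact format here is inferred from the example listings in this section, not from a documented API:

```python
import datetime
import re

# Version names look like 'one-v2016-07-04-141032-000.txt': the original
# name plus '-v' and a UTC timestamp, inserted before the extension.
_VERSION_RE = re.compile(
    r"^(?P<base>.*)-v(?P<ts>\d{4}-\d{2}-\d{2}-\d{6}-\d{3})(?P<ext>\.[^.]*)?$")

def split_version(name):
    """Return (original name, UTC timestamp) or (name, None) for current files."""
    m = _VERSION_RE.match(name)
    if not m:
        return name, None
    ts = datetime.datetime.strptime(m.group("ts"), "%Y-%m-%d-%H%M%S-%f")
    return m.group("base") + (m.group("ext") or ""), ts

print(split_version("one-v2016-07-04-141032-000.txt"))
# ('one.txt', datetime.datetime(2016, 7, 4, 14, 10, 32))
```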
Yandex Disk
----------------------------------------
@ -3012,6 +3271,42 @@ file exceeds 258 characters on z, so only use this option if you have to.
Changelog
---------
* v1.31 - 2016-07-13
* New Features
* Reduce memory on sync by about 50%
* Implement --no-traverse flag to stop copy traversing the destination remote.
* This can be used to reduce memory usage down to the smallest possible.
* Useful to copy a small number of files into a large destination folder.
* Implement cleanup command for emptying trash / removing old versions of files
* Currently B2 only
* Single file handling improved
* Now copied with --files-from
* Automatically sets --no-traverse when copying a single file
* Info on installing with ansible - thanks Stefan Weichinger
* Implement --no-update-modtime flag to stop rclone fixing the remote modified times.
* Bug Fixes
* Fix move command - stop it running for overlapping Fses - this was causing data loss.
* Local
* Fix incomplete hashes - this was causing problems for B2.
* Amazon Drive
* Rename Amazon Cloud Drive to Amazon Drive - no changes to config file needed.
* Swift
* Add support for non-default project domain - thanks Antonio Messina.
* S3
* Add instructions on how to use rclone with minio.
* Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
* Skip setting the modified time for objects > 5GB as it isn't possible.
* Backblaze B2
* Add --b2-versions flag so old versions can be listed and retrieved.
* Treat 403 errors (eg cap exceeded) as fatal.
* Implement cleanup command for deleting old file versions.
* Make error handling compliant with B2 integrations notes.
* Fix handling of token expiry.
* Implement --b2-test-mode to set `X-Bz-Test-Mode` header.
* Set cutoff for chunked upload to 200MB as per B2 guidelines.
* Make upload multi-threaded.
* Dropbox
* Don't retry 461 errors.
* v1.30 - 2016-06-18
* New Features
* Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables
@ -3593,6 +3888,8 @@ Contributors
* Leigh Klotz <klotz@quixey.com>
* Romain Lapray <lapray.romain@gmail.com>
* Justin R. Wilson <jrw972@gmail.com>
* Antonio Messina <antonio.s.messina@gmail.com>
* Stefan G. Weichinger <office@oops.co.at>
Contact the rclone project
--------------------------
@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Jul 13, 2016
@ -75,6 +75,21 @@ linux binary downloaded files install example
sudo mandb
Installation with Ansible
This can be done with Stefan Weichinger's ansible role.
Instructions
1. git clone https://github.com/stefangweichinger/ansible-rclone.git
into your local roles-directory
2. add the role to the hosts you want rclone installed to:
- hosts: rclone-hosts
roles:
- rclone
Configure
First you'll need to configure rclone. As the object storage systems
@ -156,6 +171,9 @@ written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.
See the --no-traverse option for controlling whether rclone lists the
destination directory or not.
rclone sync source:path dest:path
Sync the source to the destination, changing the destination only.
@ -179,15 +197,17 @@ go there.
move source:path dest:path
Moves the contents of the source directory to the destination directory.
Rclone will error if the source and destination overlap.
If no filters are in use and if possible this will server side move
source:path into dest:path. After this source:path will no longer
exist.
Otherwise for each file in source:path selected by the filters (if any)
this will move it into dest:path. If possible a server side move will be
used, otherwise it will copy it (server side if possible) into dest:path
then delete the original (if no errors on copy) in source:path.
IMPORTANT: Since this can cause data loss, test first with the --dry-run
flag.
@ -262,6 +282,11 @@ alter the source or destination.
--size-only may be used to only compare the sizes, not the MD5SUMs.
rclone cleanup remote:path
Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
rclone dedupe remote:path
By default dedup interactively finds duplicate files and offers to
@ -350,6 +375,34 @@ rclone help
Prints help on rclone commands and options.
Copying single files
rclone normally syncs or copies directories. However if the source
remote points to a file, rclone will just copy that file. The
destination remote must point to a directory - rclone will give the
error
Failed to create file system for "remote:file": is a file not a directory
if it isn't.
For example, suppose you have a remote with a file in called test.jpg,
then you could copy just that file like this
rclone copy remote:test.jpg /tmp/download
The file test.jpg will be placed inside /tmp/download.
This is equivalent to specifying
rclone copy --no-traverse --files-from /tmp/files remote: /tmp/download
Where /tmp/files contains the single line
test.jpg
It is recommended to use copy, not sync, when copying single files.
They have pretty much the same effect but copy will use a lot less memory.
Quoting and the shell
When you are typing commands to your computer you are using something
@ -589,6 +642,14 @@ compressed files.
There is no need to set this in normal operation, and doing so will
decrease the network transfer efficiency of rclone.
--no-update-modtime
When using this flag, rclone won't update modification times of remote
files if they are incorrect as it would normally.
This can be used if the remote is being synced with another tool also
(eg the Google Drive client).
-q, --quiet
Normally rclone outputs stats and a completion message. If you set this
@ -628,10 +689,13 @@ deleted when you sync folders.
Specifying the value --delete-before will delete all files present on
the destination, but not on the source _before_ starting the transfer of
any new or updated files. This uses extra memory as it has to store the
source listing before proceeding.
Specifying --delete-during (default value) will delete files while
checking and uploading files. This is usually the fastest option.
Currently this works the same as --delete-after but it may change in the
future.
Specifying --delete-after will delay deletion of files until all
new/updated files have been successfully transferred.
@ -790,6 +854,24 @@ This option defaults to false.
THIS SHOULD BE USED ONLY FOR TESTING.
--no-traverse
The --no-traverse flag controls whether the destination file system is
traversed when using the copy or move commands.
If you are only copying a small number of files and/or have a large
number of files on the destination then --no-traverse will stop rclone
listing the destination and save time.
However if you are copying a large number of files, especially if you
are doing a copy where lots of the files haven't changed and won't need
copying, then you shouldn't use --no-traverse.
It can also be used to reduce the memory usage of rclone when copying -
rclone --no-traverse copy src dst won't load either the source or
destination listings into memory so will use the minimum amount of
memory.
Filtering
@ -1064,7 +1146,7 @@ This would exclude
A similar process is done on directory entries before recursing into
them. This only works on remotes which have a concept of directory (Eg
local, google drive, onedrive, amazon drive) and not on bucket based
remotes (eg s3, swift, google compute storage, b2).
@ -1172,6 +1254,40 @@ Prepare a file like this files-from.txt
Then use as --files-from files-from.txt. This will only transfer
file1.jpg and file2.jpg providing they exist.
For example, let's say you had a few files you want to back up regularly
with these absolute paths:
/home/user1/important
/home/user1/dir/file
/home/user2/stuff
To copy these you'd find a common subdirectory - in this case /home and
put the remaining files in files-from.txt with or without leading /, eg
user1/important
user1/dir/file
user2/stuff
You could then copy these to a remote like this
rclone copy --files-from files-from.txt /home remote:backup
The 3 files will arrive in remote:backup with the paths as in the
files-from.txt.
You could of course choose / as the root too in which case your
files-from.txt might look like this.
/home/user1/important
/home/user1/dir/file
/home/user2/stuff
And you would transfer it like this
rclone copy --files-from files-from.txt / remote:backup
In this case there will be an extra home directory on the remote.
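The files-from workflow above can be sketched end to end. This is an illustrative sketch only: the list file path and the remote name `remote:backup` are placeholders, and the rclone invocation itself is left commented since it needs a configured remote.

```shell
# Build the files-from list from the example above (paths relative to /home).
mkdir -p /tmp/rclone-demo
cat > /tmp/rclone-demo/files-from.txt <<'EOF'
user1/important
user1/dir/file
user2/stuff
EOF
# With a configured remote this would transfer just those three files:
# rclone copy --files-from /tmp/rclone-demo/files-from.txt /home remote:backup
cat /tmp/rclone-demo/files-from.txt
```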
--min-size - Don't transfer any file smaller than this
This option controls the minimum size file which will be transferred.
@ -1862,6 +1978,59 @@ removed).
Because this is a json dump, it is encoding the / as \/, so if you use
the secret key as xxxxxx/xxxx it will work fine.
Minio
Minio is an object storage server built for cloud application developers
and devops.
It is very easy to install and provides an S3 compatible server which
can be used by rclone.
To use it, install Minio following the instructions from the web site.
When it configures itself Minio will print something like this
AccessKey: WLGDGYAQYIGI833EV05A SecretKey: BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF Region: us-east-1
Minio Object Storage:
http://127.0.0.1:9000
http://10.0.0.3:9000
Minio Browser:
http://127.0.0.1:9000
http://10.0.0.3:9000
These details need to go into rclone config like this. Note that it is
important to put the region in as stated above.
env_auth> 1
access_key_id> WLGDGYAQYIGI833EV05A
secret_access_key> BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region> us-east-1
endpoint> http://10.0.0.3:9000
location_constraint>
server_side_encryption>
Which makes the config file look like this
[minio]
env_auth = false
access_key_id = WLGDGYAQYIGI833EV05A
secret_access_key = BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region = us-east-1
endpoint = http://10.0.0.3:9000
location_constraint =
server_side_encryption =
Minio doesn't support all the features of S3 yet. In particular it
doesn't support MD5 checksums (ETags) or metadata. This means rclone
can't check MD5SUMs or store the modified date. However you can work
around this with the --size-only flag of rclone.
So once set up, for example to copy files into a bucket
rclone --size-only copy /path/to/files minio:bucket
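As a sketch of a non-interactive setup: the config file path below is a placeholder selected via rclone's --config flag, the keys and endpoint are the example values printed by Minio above, and the `type = s3` line is an assumption based on what `rclone config` normally writes for an S3-compatible remote.

```shell
# Write a "minio" remote stanza to a standalone config file.
# Keys/endpoint are the example values from the Minio output above.
cat > /tmp/demo-rclone.conf <<'EOF'
[minio]
type = s3
env_auth = false
access_key_id = WLGDGYAQYIGI833EV05A
secret_access_key = BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region = us-east-1
endpoint = http://10.0.0.3:9000
location_constraint =
server_side_encryption =
EOF
# Then point rclone at it explicitly:
# rclone --config /tmp/demo-rclone.conf --size-only copy /path/to/files minio:bucket
```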
Swift
@ -1934,6 +2103,8 @@ This will guide you through an interactive setup process.
domain> Default
Tenant name - optional
tenant>
Tenant domain - optional (v3 auth)
tenant_domain>
Region name - optional
region>
Storage URL - optional
@ -2302,9 +2473,9 @@ Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory.
The initial setup for Amazon Drive involves getting a token from Amazon
which you need to do in your browser. rclone config walks you through
it.
Here is an example of how to make a remote called remote. First run:
@ -2373,23 +2544,22 @@ temporarily if you are running a host firewall.
Once configured you can then use rclone like this,
List directories in top level of your Amazon Drive
rclone lsd remote:
List all the files in your Amazon Drive
rclone ls remote:
To copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup
Modified time and MD5SUMs
Amazon Drive doesn't allow modification times to be changed via the API
so these won't be accurate or used for syncing.
It does store MD5SUMs so for a more accurate sync, you can use the
--checksum flag.
@ -2399,7 +2569,7 @@ Deleting files
Any files you delete with rclone will end up in the trash. Amazon don't
provide an API to permanently delete files, nor to empty the trash, so
you will have to do that with one of Amazon's apps or via the Amazon
Drive website.
Specific options
@ -2408,8 +2578,8 @@ Here are the command line options specific to this cloud storage system.
--acd-templink-threshold=SIZE
Files this size or more will be downloaded via their tempLink. This is
to work around a problem with Amazon Drive which blocks downloads of
files bigger than about 10GB. The default for this is 9GB which
shouldn't need to be changed.
To download files above this threshold, rclone requests a tempLink which
@ -2418,17 +2588,17 @@ S3 storage.
Limitations
Note that Amazon Drive is case insensitive so you can't have a file
called "Hello.doc" and one called "hello.doc".
Amazon Drive has rate limiting so you may notice errors in the sync (429
errors). rclone will automatically retry the sync up to 3 times by
default (see --retries flag) which should hopefully work around this
problem.
Amazon Drive has an internal limit of file sizes that can be uploaded to
the service. This limit is not officially published, but all files
larger than this will fail.
At the time of writing (Jan 2016) this is in the area of 50GB per file.
This means that larger files are likely to fail.
@ -2802,19 +2972,6 @@ will be used in the syncing process. You can use the --checksum flag.
Large files which are uploaded in chunks will store their SHA1 on the
object as X-Bz-Info-large_file_sha1 as recommended by Backblaze.
Versions
When rclone uploads a new version of a file it creates a new version of
it. Likewise when you delete a file, the old version will still be
available.
The old versions of files are visible in the B2 web interface, but not
via rclone yet.
Rclone doesn't provide any way of managing old versions (downloading
them or deleting them) at the moment. When you purge a bucket, all the
old versions will be deleted.
Transfers
Backblaze recommends that you do lots of transfers simultaneously for
@ -2825,6 +2982,57 @@ hardware, how big the files are, how much you want to load your
computer, etc. The default of --transfers 4 is definitely too low for
Backblaze B2 though.
Versions
When rclone uploads a new version of a file it creates a new version of
it. Likewise when you delete a file, the old version will still be
available.
Old versions of files are visible using the --b2-versions flag.
If you wish to remove all the old versions then you can use the
rclone cleanup remote:bucket command which will delete all the old
versions of files, leaving the current ones intact. You can also supply
a path and only old versions under that path will be deleted, eg
rclone cleanup remote:bucket/path/to/stuff.
When you purge a bucket, the current and the old versions will be
deleted then the bucket will be deleted.
However delete will cause the current versions of the files to become
hidden old versions.
Here is a session showing the listing and retrieval of an old
version followed by a cleanup of the old versions.
Show current version and all the versions with --b2-versions flag.
$ rclone -q ls b2:cleanup-test
9 one.txt
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
Retrieve an old version
$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
Clean up all the old versions and show that they've gone.
$ rclone -q cleanup b2:cleanup-test
$ rclone -q ls b2:cleanup-test
9 one.txt
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
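The version names shown above follow a fixed pattern. As an illustrative sketch (plain shell parameter expansion, no rclone required; the file name is taken from the session above), the base name and UTC upload timestamp can be split apart like this:

```shell
# Split a --b2-versions file name of the form
#   <base>-v<YYYY>-<MM>-<DD>-<HHMMSS>-<mmm>.<ext>
# into its base name and UTC upload timestamp.
name="one-v2016-07-04-141003-000.txt"
base="${name%-v*}"     # strip the "-v..." suffix -> one
rest="${name##*-v}"    # keep everything after "-v" -> 2016-07-04-141003-000.txt
stamp="${rest%.*}"     # drop the extension -> 2016-07-04-141003-000
echo "base=$base stamp=$stamp"
```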
Specific options
Here are the command line options specific to this cloud storage system.
@ -2841,10 +3049,44 @@ Cutoff for switching to chunked upload (default 4.657GiB == 5GB). Files
above this size will be uploaded in chunks of --b2-chunk-size. The
default value is the largest file which can be uploaded without chunks.
--b2-test-mode=FLAG
This is for debugging purposes only.
Setting FLAG to one of the strings below will cause b2 to return
specific errors for debugging purposes.
- fail_some_uploads
- expire_some_account_authorization_tokens
- force_cap_exceeded
These will be set in the X-Bz-Test-Mode header which is documented in
the b2 integrations checklist.
--b2-versions
When set rclone will show and act on older versions of files. For
example
Listing without --b2-versions
$ rclone -q ls b2:cleanup-test
9 one.txt
And with
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
Showing that the current version is unchanged but older versions can be
seen. These have the UTC date that they were uploaded to the server to
the nearest millisecond appended to them.
Note that when using --b2-versions no file write operations are
permitted, so you can't upload files or delete them.
Yandex Disk
@ -3023,6 +3265,52 @@ characters on z, so only use this option if you have to.
Changelog
- v1.31 - 2016-07-13
- New Features
- Reduce memory on sync by about 50%
- Implement --no-traverse flag to stop copy traversing the
destination remote.
- This can be used to reduce memory usage down to the
smallest possible.
- Useful to copy a small number of files into a large
destination folder.
- Implement cleanup command for emptying trash / removing old
versions of files
- Currently B2 only
- Single file handling improved
- Now copied with --files-from
- Automatically sets --no-traverse when copying a single file
- Info on installing with ansible - thanks Stefan Weichinger
- Implement --no-update-modtime flag to stop rclone fixing the
remote modified times.
- Bug Fixes
- Fix move command - stop it running for overlapping Fses - this
was causing data loss.
- Local
- Fix incomplete hashes - this was causing problems for B2.
- Amazon Drive
- Rename Amazon Cloud Drive to Amazon Drive - no changes to config
file needed.
- Swift
- Add support for non-default project domain - thanks
Antonio Messina.
- S3
- Add instructions on how to use rclone with minio.
- Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
- Skip setting the modified time for objects > 5GB as it
isn't possible.
- Backblaze B2
- Add --b2-versions flag so old versions can be listed
and retrieved.
- Treat 403 errors (eg cap exceeded) as fatal.
- Implement cleanup command for deleting old file versions.
- Make error handling compliant with B2 integrations notes.
- Fix handling of token expiry.
- Implement --b2-test-mode to set X-Bz-Test-Mode header.
- Set cutoff for chunked upload to 200MB as per B2 guidelines.
- Make upload multi-threaded.
- Dropbox
- Don't retry 461 errors.
- v1.30 - 2016-06-18
- New Features
- Directory listing code reworked for more features and better
@ -3639,6 +3927,8 @@ Contributors
- Leigh Klotz klotz@quixey.com
- Romain Lapray lapray.romain@gmail.com
- Justin R. Wilson jrw972@gmail.com
- Antonio Messina antonio.s.messina@gmail.com
- Stefan G. Weichinger office@oops.co.at
Contact the rclone project
@ -1,12 +1,48 @@
---
title: "Documentation"
description: "Rclone Changelog"
date: "2016-07-13"
---
Changelog
---------
* v1.31 - 2016-07-13
* New Features
* Reduce memory on sync by about 50%
* Implement --no-traverse flag to stop copy traversing the destination remote.
* This can be used to reduce memory usage down to the smallest possible.
* Useful to copy a small number of files into a large destination folder.
* Implement cleanup command for emptying trash / removing old versions of files
* Currently B2 only
* Single file handling improved
* Now copied with --files-from
* Automatically sets --no-traverse when copying a single file
* Info on installing with ansible - thanks Stefan Weichinger
* Implement --no-update-modtime flag to stop rclone fixing the remote modified times.
* Bug Fixes
* Fix move command - stop it running for overlapping Fses - this was causing data loss.
* Local
* Fix incomplete hashes - this was causing problems for B2.
* Amazon Drive
* Rename Amazon Cloud Drive to Amazon Drive - no changes to config file needed.
* Swift
* Add support for non-default project domain - thanks Antonio Messina.
* S3
* Add instructions on how to use rclone with minio.
* Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
* Skip setting the modified time for objects > 5GB as it isn't possible.
* Backblaze B2
* Add --b2-versions flag so old versions can be listed and retrieved.
* Treat 403 errors (eg cap exceeded) as fatal.
* Implement cleanup command for deleting old file versions.
* Make error handling compliant with B2 integrations notes.
* Fix handling of token expiry.
* Implement --b2-test-mode to set `X-Bz-Test-Mode` header.
* Set cutoff for chunked upload to 200MB as per B2 guidelines.
* Make upload multi-threaded.
* Dropbox
* Don't retry 461 errors.
* v1.30 - 2016-06-18
* New Features
* Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables
@ -2,40 +2,40 @@
title: "Rclone downloads"
description: "Download rclone binaries for your OS."
type: page
date: "2016-07-13"
---
Rclone Download v1.31
=====================
* Windows
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.31-windows-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.31-windows-amd64.zip)
* OSX
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.31-osx-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.31-osx-amd64.zip)
* Linux
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.31-linux-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.31-linux-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.31-linux-arm.zip)
* FreeBSD
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.31-freebsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.31-freebsd-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.31-freebsd-arm.zip)
* NetBSD
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.31-netbsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.31-netbsd-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.31-netbsd-arm.zip)
* OpenBSD
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.31-openbsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.31-openbsd-amd64.zip)
* Plan 9
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.31-plan9-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.31-plan9-amd64.zip)
* Solaris
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.31-solaris-amd64.zip)
You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.31).
Downloads for scripting
=======================
@ -1,4 +1,4 @@
package fs
// Version of rclone
var Version = "v1.31"
rclone.1
@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 1.16.0.2
.\"
.TH "rclone" "1" "Jul 13, 2016" "User Manual" ""
.hy
.SH Rclone
.PP
@ -99,7 +99,26 @@ sudo\ chmod\ 755\ /usr/sbin/rclone
#install\ manpage
sudo\ mkdir\ \-p\ /usr/local/share/man/man1
sudo\ cp\ rclone.1\ /usr/local/share/man/man1/
sudo\ mandb\
\f[]
.fi
.SS Installation with Ansible
.PP
This can be done with Stefan Weichinger\[aq]s ansible
role (https://github.com/stefangweichinger/ansible-rclone).
.PP
Instructions
.IP "1." 3
\f[C]git\ clone\ https://github.com/stefangweichinger/ansible\-rclone.git\f[]
into your local roles\-directory
.IP "2." 3
add the role to the hosts you want rclone installed to:
.IP
.nf
\f[C]
\ \ \ \ \-\ hosts:\ rclone\-hosts
\ \ \ \ \ \ roles:
\ \ \ \ \ \ \ \ \ \ \-\ rclone
\f[]
.fi
.SS Configure
@ -216,6 +235,9 @@ had written a trailing / \- meaning "copy the contents of this
directory".
This applies to all commands and whether you are talking about the
source or destination.
.PP
See the \f[C]\-\-no\-traverse\f[] option for controlling whether rclone
lists the destination directory or not.
.SS rclone sync source:path dest:path
.PP
Sync the source to the destination, changing the destination only.
@ -240,16 +262,18 @@ If dest:path doesn\[aq]t exist, it is created and the source:path
contents go there. contents go there.
.SS move source:path dest:path .SS move source:path dest:path
.PP .PP
Moves the source to the destination. Moves the contents of the source directory to the destination directory.
Rclone will error if the source and destination overlap.
.PP .PP
If there are no filters in use this is equivalent to a copy followed by If no filters are in use and if possible this will server side move
a purge, but may use server side operations to speed it up if possible. \f[C]source:path\f[] into \f[C]dest:path\f[].
After this \f[C]source:path\f[] will no longer exist.
.PP .PP
If filters are in use then it is equivalent to a copy followed by Otherwise for each file in \f[C]source:path\f[] selected by the filters
delete, followed by an rmdir (which only removes the directory if (if any) this will move it into \f[C]dest:path\f[].
empty). If possible a server side move will be used, otherwise it will copy it
The individual file moves will be moved with server side operations if (server side if possible) into \f[C]dest:path\f[] then delete the
possible. original (if no errors on copy) in \f[C]source:path\f[].
.PP .PP
\f[B]Important\f[]: Since this can cause data loss, test first with the \f[B]Important\f[]: Since this can cause data loss, test first with the
\-\-dry\-run flag. \-\-dry\-run flag.
@ -325,6 +349,11 @@ It doesn\[aq]t alter the source or destination.
.PP .PP
\f[C]\-\-size\-only\f[] may be used to only compare the sizes, not the \f[C]\-\-size\-only\f[] may be used to only compare the sizes, not the
MD5SUMs. MD5SUMs.
.SS rclone cleanup remote:path
.PP
Clean up the remote if possible.
Empty the trash or delete old file versions.
Not supported by all remotes.
.SS rclone dedupe remote:path .SS rclone dedupe remote:path
.PP .PP
By default \f[C]dedup\f[] interactively finds duplicate files and offers By default \f[C]dedup\f[] interactively finds duplicate files and offers
@ -433,6 +462,47 @@ Enter an interactive configuration session.
.SS rclone help .SS rclone help
.PP .PP
Prints help on rclone commands and options. Prints help on rclone commands and options.
.SS Copying single files
.PP
rclone normally syncs or copies directories.
However if the source remote points to a file, rclone will just copy
that file.
The destination remote must point to a directory \- rclone will give the
error
\f[C]Failed\ to\ create\ file\ system\ for\ "remote:file":\ is\ a\ file\ not\ a\ directory\f[]
if it isn\[aq]t.
.PP
For example, suppose you have a remote with a file in called
\f[C]test.jpg\f[], then you could copy just that file like this
.IP
.nf
\f[C]
rclone\ copy\ remote:test.jpg\ /tmp/download
\f[]
.fi
.PP
The file \f[C]test.jpg\f[] will be placed inside \f[C]/tmp/download\f[].
.PP
This is equivalent to specifying
.IP
.nf
\f[C]
rclone\ copy\ \-\-no\-traverse\ \-\-files\-from\ /tmp/files\ remote:\ /tmp/download
\f[]
.fi
.PP
Where \f[C]/tmp/files\f[] contains the single line
.IP
.nf
\f[C]
test.jpg
\f[]
.fi
.PP
It is recommended to use \f[C]copy\f[] when copying single files not
\f[C]sync\f[].
They have pretty much the same effect but \f[C]copy\f[] will use a lot
less memory.
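The single-file equivalence above can be sketched end to end. The remote name and paths are the ones from the example; the rclone invocation itself is left commented since it needs a configured remote:

```shell
# Create the one-line files-from list used by the equivalent command above.
printf 'test.jpg\n' > /tmp/files
cat /tmp/files
# Then (requires a configured rclone remote named "remote"):
# rclone copy --no-traverse --files-from /tmp/files remote: /tmp/download
```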
.SS Quoting and the shell .SS Quoting and the shell
.PP .PP
When you are typing commands to your computer you are using something When you are typing commands to your computer you are using something
@ -699,6 +769,13 @@ Useful if you\[aq]ve set the server to return files with
.PP .PP
There is no need to set this in normal operation, and doing so will There is no need to set this in normal operation, and doing so will
decrease the network transfer efficiency of rclone. decrease the network transfer efficiency of rclone.
.SS \-\-no\-update\-modtime
.PP
When using this flag, rclone won\[aq]t update modification times of
remote files if they are incorrect as it would normally.
.PP
This can be used if the remote is being synced with another tool also
(eg the Google Drive client).
.SS \-q, \-\-quiet .SS \-q, \-\-quiet
.PP .PP
Normally rclone outputs stats and a completion message. Normally rclone outputs stats and a completion message.
@ -736,10 +813,14 @@ deleted when you sync folders.
Specifying the value \f[C]\-\-delete\-before\f[] will delete all files Specifying the value \f[C]\-\-delete\-before\f[] will delete all files
present on the destination, but not on the source \f[I]before\f[] present on the destination, but not on the source \f[I]before\f[]
starting the transfer of any new or updated files. starting the transfer of any new or updated files.
This uses extra memory as it has to store the source listing before
proceeding.
.PP .PP
Specifying \f[C]\-\-delete\-during\f[] (default value) will delete files Specifying \f[C]\-\-delete\-during\f[] (default value) will delete files
while checking and uploading files. while checking and uploading files.
This is usually the fastest option. This is usually the fastest option.
Currently this works the same as \f[C]\-\-delete\-after\f[] but it may
change in the future.
.PP .PP
Specifying \f[C]\-\-delete\-after\f[] will delay deletion of files until Specifying \f[C]\-\-delete\-after\f[] will delay deletion of files until
all new/updated files have been successfully transferred. all new/updated files have been successfully transferred.
@ -909,6 +990,25 @@ In this mode, TLS is susceptible to man\-in\-the\-middle attacks.
This option defaults to \f[C]false\f[]. This option defaults to \f[C]false\f[].
.PP .PP
\f[B]This should be used only for testing.\f[] \f[B]This should be used only for testing.\f[]
.SS \-\-no\-traverse
.PP
The \f[C]\-\-no\-traverse\f[] flag controls whether the destination file
system is traversed when using the \f[C]copy\f[] or \f[C]move\f[]
commands.
.PP
If you are only copying a small number of files and/or have a large
number of files on the destination then \f[C]\-\-no\-traverse\f[] will
stop rclone listing the destination and save time.
.PP
However if you are copying a large number of files, especially if you
are doing a copy where lots of the files haven\[aq]t changed and
won\[aq]t need copying, then you shouldn\[aq]t use
\f[C]\-\-no\-traverse\f[].
.PP
It can also be used to reduce the memory usage of rclone when copying \-
\f[C]rclone\ \-\-no\-traverse\ copy\ src\ dst\f[] won\[aq]t load either
the source or destination listings into memory so will use the minimum
amount of memory.
.SS Filtering .SS Filtering
.PP .PP
For the filtering options For the filtering options
@ -1256,8 +1356,8 @@ non \f[C]*.jpg\f[] and \f[C]*.png\f[]
A similar process is done on directory entries before recursing into A similar process is done on directory entries before recursing into
them. them.
This only works on remotes which have a concept of directory (Eg local, This only works on remotes which have a concept of directory (Eg local,
drive, onedrive, amazon cloud drive) and not on bucket based remotes (eg google drive, onedrive, amazon drive) and not on bucket based remotes
s3, swift, google compute storage, b2). (eg s3, swift, google compute storage, b2).
.SS Adding filtering rules .SS Adding filtering rules
.PP .PP
Filtering rules are added with the following command line flags. Filtering rules are added with the following command line flags.
@ -1383,6 +1483,62 @@ file2.jpg
Then use as \f[C]\-\-files\-from\ files\-from.txt\f[]. Then use as \f[C]\-\-files\-from\ files\-from.txt\f[].
This will only transfer \f[C]file1.jpg\f[] and \f[C]file2.jpg\f[] This will only transfer \f[C]file1.jpg\f[] and \f[C]file2.jpg\f[]
providing they exist. providing they exist.
.PP
For example, let\[aq]s say you had a few files you want to back up
regularly with these absolute paths:
.IP
.nf
\f[C]
/home/user1/important
/home/user1/dir/file
/home/user2/stuff
\f[]
.fi
.PP
To copy these you\[aq]d find a common subdirectory \- in this case
\f[C]/home\f[] and put the remaining files in \f[C]files\-from.txt\f[]
with or without leading \f[C]/\f[], eg
.IP
.nf
\f[C]
user1/important
user1/dir/file
user2/stuff
\f[]
.fi
.PP
You could then copy these to a remote like this
.IP
.nf
\f[C]
rclone\ copy\ \-\-files\-from\ files\-from.txt\ /home\ remote:backup
\f[]
.fi
.PP
The 3 files will arrive in \f[C]remote:backup\f[] with the paths as in
the \f[C]files\-from.txt\f[].
.PP
You could of course choose \f[C]/\f[] as the root too in which case your
\f[C]files\-from.txt\f[] might look like this.
.IP
.nf
\f[C]
/home/user1/important
/home/user1/dir/file
/home/user2/stuff
\f[]
.fi
.PP
And you would transfer it like this
.IP
.nf
\f[C]
rclone\ copy\ \-\-files\-from\ files\-from.txt\ /\ remote:backup
\f[]
.fi
.PP
In this case there will be an extra \f[C]home\f[] directory on the
remote.
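The backup example above can be reproduced like this. The paths and the remote name are the hypothetical ones from the text, and the copy itself is commented since it needs a configured remote:

```shell
# Build files-from.txt with the three paths relative to the /home root.
printf '%s\n' user1/important user1/dir/file user2/stuff > files-from.txt
cat files-from.txt
# Then (requires a configured rclone remote named "remote"):
# rclone copy --files-from files-from.txt /home remote:backup
```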
.SS \f[C]\-\-min\-size\f[] \- Don\[aq]t transfer any file smaller than .SS \f[C]\-\-min\-size\f[] \- Don\[aq]t transfer any file smaller than
this this
.PP .PP
@ -1718,7 +1874,7 @@ e/n/d/q>\ n
name>\ remote name>\ remote
Type\ of\ storage\ to\ configure. Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive \ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive" \ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph) \ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3" \ \ \ \\\ "s3"
@ -2038,7 +2194,7 @@ n/s>\ n
name>\ remote name>\ remote
Type\ of\ storage\ to\ configure. Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive \ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive" \ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph) \ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3" \ \ \ \\\ "s3"
@ -2336,6 +2492,75 @@ removed).
Because this is a json dump, it is encoding the \f[C]/\f[] as Because this is a json dump, it is encoding the \f[C]/\f[] as
\f[C]\\/\f[], so if you use the secret key as \f[C]xxxxxx/xxxx\f[] it \f[C]\\/\f[], so if you use the secret key as \f[C]xxxxxx/xxxx\f[] it
will work fine. will work fine.
.SS Minio
.PP
Minio (https://minio.io/) is an object storage server built for cloud
application developers and devops.
.PP
It is very easy to install and provides an S3 compatible server which
can be used by rclone.
.PP
To use it, install Minio following the instructions from the web site.
.PP
When it configures itself Minio will print something like this
.IP
.nf
\f[C]
AccessKey:\ WLGDGYAQYIGI833EV05A\ \ SecretKey:\ BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF\ Region:\ us\-east\-1
Minio\ Object\ Storage:
\ \ \ \ \ http://127.0.0.1:9000
\ \ \ \ \ http://10.0.0.3:9000
Minio\ Browser:
\ \ \ \ \ http://127.0.0.1:9000
\ \ \ \ \ http://10.0.0.3:9000
\f[]
.fi
.PP
These details need to go into \f[C]rclone\ config\f[] like this.
Note that it is important to put the region in as stated above.
.IP
.nf
\f[C]
env_auth>\ 1
access_key_id>\ WLGDGYAQYIGI833EV05A
secret_access_key>\ BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF\ \ \
region>\ us\-east\-1
endpoint>\ http://10.0.0.3:9000
location_constraint>\
server_side_encryption>
\f[]
.fi
.PP
Which makes the config file look like this
.IP
.nf
\f[C]
[minio]
env_auth\ =\ false
access_key_id\ =\ WLGDGYAQYIGI833EV05A
secret_access_key\ =\ BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region\ =\ us\-east\-1
endpoint\ =\ http://10.0.0.3:9000
location_constraint\ =\
server_side_encryption\ =\
\f[]
.fi
.PP
Minio doesn\[aq]t support all the features of S3 yet.
In particular it doesn\[aq]t support MD5 checksums (ETags) or metadata.
This means rclone can\[aq]t check MD5SUMs or store the modified date.
However you can work around this with the \f[C]\-\-size\-only\f[] flag
of rclone.
.PP
So once set up, for example to copy files into a bucket
.IP
.nf
\f[C]
rclone\ \-\-size\-only\ copy\ /path/to/files\ minio:bucket
\f[]
.fi
.SS Swift .SS Swift
.PP .PP
Swift refers to Openstack Object Swift refers to Openstack Object
@ -2370,7 +2595,7 @@ n/s>\ n
name>\ remote name>\ remote
Type\ of\ storage\ to\ configure. Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive \ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive" \ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph) \ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3" \ \ \ \\\ "s3"
@ -2416,6 +2641,8 @@ User\ domain\ \-\ optional\ (v3\ auth)
domain>\ Default domain>\ Default
Tenant\ name\ \-\ optional Tenant\ name\ \-\ optional
tenant>\ tenant>\
Tenant\ domain\ \-\ optional\ (v3\ auth)
tenant_domain>
Region\ name\ \-\ optional Region\ name\ \-\ optional
region>\ region>\
Storage\ URL\ \-\ optional Storage\ URL\ \-\ optional
@ -2538,7 +2765,7 @@ e/n/d/q>\ n
name>\ remote name>\ remote
Type\ of\ storage\ to\ configure. Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive \ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive" \ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph) \ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3" \ \ \ \\\ "s3"
@ -2680,7 +2907,7 @@ e/n/d/q>\ n
name>\ remote name>\ remote
Type\ of\ storage\ to\ configure. Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive \ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive" \ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph) \ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3" \ \ \ \\\ "s3"
@ -2847,8 +3074,8 @@ Paths are specified as \f[C]remote:path\f[]
Paths may be as deep as required, eg Paths may be as deep as required, eg
\f[C]remote:directory/subdirectory\f[]. \f[C]remote:directory/subdirectory\f[].
.PP .PP
The initial setup for Amazon cloud drive involves getting a token from The initial setup for Amazon Drive involves getting a token from Amazon
Amazon which you need to do in your browser. which you need to do in your browser.
\f[C]rclone\ config\f[] walks you through it. \f[C]rclone\ config\f[] walks you through it.
.PP .PP
Here is an example of how to make a remote called \f[C]remote\f[]. Here is an example of how to make a remote called \f[C]remote\f[].
@ -2871,7 +3098,7 @@ e/n/d/q>\ n
name>\ remote name>\ remote
Type\ of\ storage\ to\ configure. Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive \ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive" \ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph) \ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3" \ \ \ \\\ "s3"
@ -2928,7 +3155,7 @@ to unblock it temporarily if you are running a host firewall.
.PP .PP
Once configured you can then use \f[C]rclone\f[] like this, Once configured you can then use \f[C]rclone\f[] like this,
.PP .PP
List directories in top level of your Amazon cloud drive List directories in top level of your Amazon Drive
.IP .IP
.nf .nf
\f[C] \f[C]
@ -2936,7 +3163,7 @@ rclone\ lsd\ remote:
\f[] \f[]
.fi .fi
.PP .PP
List all the files in your Amazon cloud drive List all the files in your Amazon Drive
.IP .IP
.nf .nf
\f[C] \f[C]
@ -2944,8 +3171,7 @@ rclone\ ls\ remote:
\f[] \f[]
.fi .fi
.PP .PP
To copy a local directory to an Amazon cloud drive directory called To copy a local directory to an Amazon Drive directory called backup
backup
.IP .IP
.nf .nf
\f[C] \f[C]
@ -2954,8 +3180,8 @@ rclone\ copy\ /home/source\ remote:backup
.fi .fi
.SS Modified time and MD5SUMs .SS Modified time and MD5SUMs
.PP .PP
Amazon cloud drive doesn\[aq]t allow modification times to be changed Amazon Drive doesn\[aq]t allow modification times to be changed via the
via the API so these won\[aq]t be accurate or used for syncing. API so these won\[aq]t be accurate or used for syncing.
.PP .PP
It does store MD5SUMs so for a more accurate sync, you can use the It does store MD5SUMs so for a more accurate sync, you can use the
\f[C]\-\-checksum\f[] flag. \f[C]\-\-checksum\f[] flag.
@ -2964,7 +3190,7 @@ It does store MD5SUMs so for a more accurate sync, you can use the
Any files you delete with rclone will end up in the trash. Any files you delete with rclone will end up in the trash.
Amazon don\[aq]t provide an API to permanently delete files, nor to Amazon don\[aq]t provide an API to permanently delete files, nor to
empty the trash, so you will have to do that with one of Amazon\[aq]s empty the trash, so you will have to do that with one of Amazon\[aq]s
apps or via the Amazon cloud drive website. apps or via the Amazon Drive website.
.SS Specific options .SS Specific options
.PP .PP
Here are the command line options specific to this cloud storage system. Here are the command line options specific to this cloud storage system.
@ -2980,17 +3206,17 @@ To download files above this threshold, rclone requests a
directly from the underlying S3 storage. directly from the underlying S3 storage.
.SS Limitations .SS Limitations
.PP .PP
Note that Amazon cloud drive is case insensitive so you can\[aq]t have a Note that Amazon Drive is case insensitive so you can\[aq]t have a file
file called "Hello.doc" and one called "hello.doc". called "Hello.doc" and one called "hello.doc".
.PP .PP
Amazon cloud drive has rate limiting so you may notice errors in the Amazon Drive has rate limiting so you may notice errors in the sync (429
sync (429 errors). errors).
rclone will automatically retry the sync up to 3 times by default (see rclone will automatically retry the sync up to 3 times by default (see
\f[C]\-\-retries\f[] flag) which should hopefully work around this \f[C]\-\-retries\f[] flag) which should hopefully work around this
problem. problem.
.PP .PP
Amazon cloud drive has an internal limit of file sizes that can be Amazon Drive has an internal limit of file sizes that can be uploaded to
uploaded to the service. the service.
This limit is not officially published, but all files larger than this This limit is not officially published, but all files larger than this
will fail. will fail.
.PP .PP
@ -3033,7 +3259,7 @@ n/s>\ n
name>\ remote name>\ remote
Type\ of\ storage\ to\ configure. Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive \ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive" \ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph) \ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3" \ \ \ \\\ "s3"
@ -3193,7 +3419,7 @@ n/s>\ n
name>\ remote name>\ remote
Type\ of\ storage\ to\ configure. Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive \ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive" \ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph) \ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3" \ \ \ \\\ "s3"
@ -3339,7 +3565,7 @@ n/q>\ n
name>\ remote name>\ remote
Type\ of\ storage\ to\ configure. Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive \ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive" \ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph) \ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3" \ \ \ \\\ "s3"
@ -3436,19 +3662,6 @@ You can use the \f[C]\-\-checksum\f[] flag.
Large files which are uploaded in chunks will store their SHA1 on the Large files which are uploaded in chunks will store their SHA1 on the
object as \f[C]X\-Bz\-Info\-large_file_sha1\f[] as recommended by object as \f[C]X\-Bz\-Info\-large_file_sha1\f[] as recommended by
Backblaze. Backblaze.
.SS Versions
.PP
When rclone uploads a new version of a file it creates a new version of
it (https://www.backblaze.com/b2/docs/file_versions.html).
Likewise when you delete a file, the old version will still be
available.
.PP
The old versions of files are visible in the B2 web interface, but not
via rclone yet.
.PP
Rclone doesn\[aq]t provide any way of managing old versions (downloading
them or deleting them) at the moment.
When you \f[C]purge\f[] a bucket, all the old versions will be deleted.
.SS Transfers .SS Transfers
.PP .PP
Backblaze recommends that you do lots of transfers simultaneously for Backblaze recommends that you do lots of transfers simultaneously for
@ -3460,6 +3673,71 @@ The optimum number for you may vary depending on your hardware, how big
the files are, how much you want to load your computer, etc. the files are, how much you want to load your computer, etc.
The default of \f[C]\-\-transfers\ 4\f[] is definitely too low for The default of \f[C]\-\-transfers\ 4\f[] is definitely too low for
Backblaze B2 though. Backblaze B2 though.
.SS Versions
.PP
When rclone uploads a new version of a file it creates a new version of
it (https://www.backblaze.com/b2/docs/file_versions.html).
Likewise when you delete a file, the old version will still be
available.
.PP
Old versions of files are visible using the \f[C]\-\-b2\-versions\f[]
flag.
.PP
If you wish to remove all the old versions then you can use the
\f[C]rclone\ cleanup\ remote:bucket\f[] command which will delete all
the old versions of files, leaving the current ones intact.
You can also supply a path and only old versions under that path will be
deleted, eg \f[C]rclone\ cleanup\ remote:bucket/path/to/stuff\f[].
.PP
When you \f[C]purge\f[] a bucket, the current and the old versions will
be deleted then the bucket will be deleted.
.PP
However \f[C]delete\f[] will cause the current versions of the files to
become hidden old versions.
.PP
Here is a session showing the listing and retrieval of an old
version followed by a \f[C]cleanup\f[] of the old versions.
.PP
Show current version and all the versions with \f[C]\-\-b2\-versions\f[]
flag.
.IP
.nf
\f[C]
$\ rclone\ \-q\ ls\ b2:cleanup\-test
\ \ \ \ \ \ \ \ 9\ one.txt
$\ rclone\ \-q\ \-\-b2\-versions\ ls\ b2:cleanup\-test
\ \ \ \ \ \ \ \ 9\ one.txt
\ \ \ \ \ \ \ \ 8\ one\-v2016\-07\-04\-141032\-000.txt
\ \ \ \ \ \ \ 16\ one\-v2016\-07\-04\-141003\-000.txt
\ \ \ \ \ \ \ 15\ one\-v2016\-07\-02\-155621\-000.txt
\f[]
.fi
.PP
Retrieve an old version
.IP
.nf
\f[C]
$\ rclone\ \-q\ \-\-b2\-versions\ copy\ b2:cleanup\-test/one\-v2016\-07\-04\-141003\-000.txt\ /tmp
$\ ls\ \-l\ /tmp/one\-v2016\-07\-04\-141003\-000.txt
\-rw\-rw\-r\-\-\ 1\ ncw\ ncw\ 16\ Jul\ \ 2\ 17:46\ /tmp/one\-v2016\-07\-04\-141003\-000.txt
\f[]
.fi
.PP
Clean up all the old versions and show that they\[aq]ve gone.
.IP
.nf
\f[C]
$\ rclone\ \-q\ cleanup\ b2:cleanup\-test
$\ rclone\ \-q\ ls\ b2:cleanup\-test
\ \ \ \ \ \ \ \ 9\ one.txt
$\ rclone\ \-q\ \-\-b2\-versions\ ls\ b2:cleanup\-test
\ \ \ \ \ \ \ \ 9\ one.txt
\f[]
.fi
.SS Specific options .SS Specific options
.PP .PP
Here are the command line options specific to this cloud storage system. Here are the command line options specific to this cloud storage system.
@ -3475,11 +3753,55 @@ Files above this size will be uploaded in chunks of
\f[C]\-\-b2\-chunk\-size\f[]. \f[C]\-\-b2\-chunk\-size\f[].
The default value is the largest file which can be uploaded without The default value is the largest file which can be uploaded without
chunks. chunks.
.SS API .SS \-\-b2\-test\-mode=FLAG
.PP .PP
Here are some notes I made on the backblaze This is for debugging purposes only.
API (https://gist.github.com/ncw/166dabf352b399f1cc1c) while integrating .PP
it with rclone. Setting FLAG to one of the strings below will cause b2 to return
specific errors for debugging purposes.
.IP \[bu] 2
\f[C]fail_some_uploads\f[]
.IP \[bu] 2
\f[C]expire_some_account_authorization_tokens\f[]
.IP \[bu] 2
\f[C]force_cap_exceeded\f[]
.PP
These will be set in the \f[C]X\-Bz\-Test\-Mode\f[] header which is
documented in the b2 integrations
checklist (https://www.backblaze.com/b2/docs/integration_checklist.html).
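A sketch of how the flag is combined with a normal command. This only prints the invocations rather than running them; "b2:bucket" and "/tmp/src" are hypothetical placeholders for a configured remote and a source path:

```shell
# Print one debug invocation per documented test-mode value.
n=0
for flag in fail_some_uploads expire_some_account_authorization_tokens force_cap_exceeded; do
  printf 'rclone --b2-test-mode %s copy /tmp/src b2:bucket\n' "$flag"
  n=$((n+1))
done
```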
.SS \-\-b2\-versions
.PP
When set rclone will show and act on older versions of files.
For example
.PP
Listing without \f[C]\-\-b2\-versions\f[]
.IP
.nf
\f[C]
$\ rclone\ \-q\ ls\ b2:cleanup\-test
\ \ \ \ \ \ \ \ 9\ one.txt
\f[]
.fi
.PP
And with
.IP
.nf
\f[C]
$\ rclone\ \-q\ \-\-b2\-versions\ ls\ b2:cleanup\-test
\ \ \ \ \ \ \ \ 9\ one.txt
\ \ \ \ \ \ \ \ 8\ one\-v2016\-07\-04\-141032\-000.txt
\ \ \ \ \ \ \ 16\ one\-v2016\-07\-04\-141003\-000.txt
\ \ \ \ \ \ \ 15\ one\-v2016\-07\-02\-155621\-000.txt
\f[]
.fi
.PP
Showing that the current version is unchanged but older versions can be
seen.
These have the UTC time at which they were uploaded to the server, to
the nearest millisecond, appended to them.
.PP
Note that when using \f[C]\-\-b2\-versions\f[] no file write operations
are permitted, so you can\[aq]t upload files or delete them.
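Because the version suffix has a fixed shape, a versioned name can be split back into its base name and UTC timestamp with plain shell parameter expansion. The helper below is an illustrative sketch, not part of rclone:

```shell
# Split "one-v2016-07-04-141032-000.txt" into base name and version stamp.
f='one-v2016-07-04-141032-000.txt'
base="${f%-v*}.${f##*.}"   # name with the -vYYYY-MM-DD-HHMMSS-mmm suffix removed
stamp="${f#*-v}"           # drop everything up to and including "-v"
stamp="${stamp%.*}"        # UTC upload time to the nearest millisecond
echo "$base $stamp"
```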
.SS Yandex Disk .SS Yandex Disk
.PP .PP
Yandex Disk (https://disk.yandex.com) is a cloud storage solution Yandex Disk (https://disk.yandex.com) is a cloud storage solution
@ -3508,7 +3830,7 @@ n/s>\ n
name>\ remote name>\ remote
Type\ of\ storage\ to\ configure. Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive \ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive" \ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph) \ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3" \ \ \ \\\ "s3"
@ -3697,6 +4019,92 @@ Of course this will cause problems if the absolute path length of a file
exceeds 258 characters on z, so only use this option if you have to. exceeds 258 characters on z, so only use this option if you have to.
.SS Changelog .SS Changelog
.IP \[bu] 2 .IP \[bu] 2
v1.31 \- 2016\-07\-13
.RS 2
.IP \[bu] 2
New Features
.IP \[bu] 2
Reduce memory on sync by about 50%
.IP \[bu] 2
Implement \-\-no\-traverse flag to stop copy traversing the destination
remote.
.RS 2
.IP \[bu] 2
This can be used to reduce memory usage down to the smallest possible.
.IP \[bu] 2
Useful to copy a small number of files into a large destination folder.
.RE
.IP \[bu] 2
Implement cleanup command for emptying trash / removing old versions of
files
.RS 2
.IP \[bu] 2
Currently B2 only
.RE
.IP \[bu] 2
Single file handling improved
.RS 2
.IP \[bu] 2
Now copied with \-\-files\-from
.IP \[bu] 2
Automatically sets \-\-no\-traverse when copying a single file
.RE
.IP \[bu] 2
Info on installing with Ansible \- thanks Stefan Weichinger
.IP \[bu] 2
Implement \-\-no\-update\-modtime flag to stop rclone fixing the remote
modified times.
.IP \[bu] 2
Bug Fixes
.IP \[bu] 2
Fix move command \- stop it running for overlapping Fses \- this was
causing data loss.
.IP \[bu] 2
Local
.IP \[bu] 2
Fix incomplete hashes \- this was causing problems for B2.
.IP \[bu] 2
Amazon Drive
.IP \[bu] 2
Rename Amazon Cloud Drive to Amazon Drive \- no changes to config file
needed.
.IP \[bu] 2
Swift
.IP \[bu] 2
Add support for non\-default project domain \- thanks Antonio Messina.
.IP \[bu] 2
S3
.IP \[bu] 2
Add instructions on how to use rclone with minio.
.IP \[bu] 2
Add ap\-northeast\-2 (Seoul) and ap\-south\-1 (Mumbai) regions.
.IP \[bu] 2
Skip setting the modified time for objects > 5GB as it isn\[aq]t
possible.
.IP \[bu] 2
Backblaze B2
.IP \[bu] 2
Add \-\-b2\-versions flag so old versions can be listed and retrieved.
.IP \[bu] 2
Treat 403 errors (eg cap exceeded) as fatal.
.IP \[bu] 2
Implement cleanup command for deleting old file versions.
.IP \[bu] 2
Make error handling compliant with B2 integrations notes.
.IP \[bu] 2
Fix handling of token expiry.
.IP \[bu] 2
Implement \-\-b2\-test\-mode to set \f[C]X\-Bz\-Test\-Mode\f[] header.
.IP \[bu] 2
Set cutoff for chunked upload to 200MB as per B2 guidelines.
.IP \[bu] 2
Make upload multi\-threaded.
.IP \[bu] 2
Dropbox
.IP \[bu] 2
Don\[aq]t retry 461 errors.
.RE
.IP \[bu] 2
v1.30 \- 2016\-06\-18 v1.30 \- 2016\-06\-18
.RS 2 .RS 2
.IP \[bu] 2 .IP \[bu] 2
@ -4806,6 +5214,11 @@ Romain Lapray <lapray.romain@gmail.com>
.IP \[bu] 2 .IP \[bu] 2
Justin R. Justin R.
Wilson <jrw972@gmail.com> Wilson <jrw972@gmail.com>
.IP \[bu] 2
Antonio Messina <antonio.s.messina@gmail.com>
.IP \[bu] 2
Stefan G.
Weichinger <office@oops.co.at>
.SS Contact the rclone project .SS Contact the rclone project
.PP .PP
The project website is at: The project website is at: