Version v1.34

This commit is contained in:
Nick Craig-Wood 2016-11-06 10:17:52 +00:00
parent b83f7ac06b
commit d95288175f
32 changed files with 4179 additions and 1813 deletions


@@ -12,7 +12,7 @@
<div id="header">
<h1 class="title">rclone(1) User Manual</h1>
<h2 class="author">Nick Craig-Wood</h2>
<h3 class="date">Nov 06, 2016</h3>
</div>
<h1 id="rclone">Rclone</h1>
<p><a href="http://rclone.org/"><img src="http://rclone.org/img/rclone-120x120.png" alt="Logo" /></a></p>
@@ -50,25 +50,47 @@
</li></li>
<li><a href="http://rclone.org/downloads/">Downloads</a></li>
</ul>
<h1 id="install">Install</h1>
<p>Rclone is a Go program and comes as a single binary file.</p>
<h2 id="quickstart">Quickstart</h2>
<ul>
<li><a href="http://rclone.org/downloads/">Download</a> the relevant binary.</li>
<li>Unpack the archive to get the <code>rclone</code> binary.</li>
<li>Run <code>rclone config</code> to setup. See <a href="http://rclone.org/docs/">rclone config docs</a> for more details.</li>
</ul>
<p>See below for some expanded Linux / macOS instructions.</p>
<p>See the <a href="http://rclone.org/docs/">Usage section</a> of the docs for how to use rclone, or run <code>rclone -h</code>.</p>
<h2 id="linux-installation-from-precompiled-binary">Linux installation from precompiled binary</h2>
<p>Fetch and unpack</p>
<pre><code>curl -O http://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip rclone-current-linux-amd64.zip
cd rclone-*-linux-amd64</code></pre>
<p>Copy binary file</p>
<pre><code>sudo cp rclone /usr/sbin/
sudo chown root:root /usr/sbin/rclone
sudo chmod 755 /usr/sbin/rclone</code></pre>
<p>Install manpage</p>
<pre><code>sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb</code></pre>
<p>Run <code>rclone config</code> to setup. See <a href="http://rclone.org/docs/">rclone config docs</a> for more details.</p>
<pre><code>rclone config</code></pre>
<h2 id="macos-installation-from-precompiled-binary">macOS installation from precompiled binary</h2>
<p>Download the latest version of rclone.</p>
<pre><code>cd &amp;&amp; curl -O http://downloads.rclone.org/rclone-current-osx-amd64.zip</code></pre>
<p>Unzip the download and cd to the extracted folder.</p>
<pre><code>unzip -a rclone-current-osx-amd64.zip &amp;&amp; cd rclone-*-osx-amd64</code></pre>
<p>Move rclone to your $PATH. You will be prompted for your password.</p>
<pre><code>sudo mv rclone /usr/local/bin/</code></pre>
<p>Remove the leftover files.</p>
<pre><code>cd .. &amp;&amp; rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip</code></pre>
<p>Run <code>rclone config</code> to setup. See <a href="http://rclone.org/docs/">rclone config docs</a> for more details.</p>
<pre><code>rclone config</code></pre>
<h2 id="install-from-source">Install from source</h2>
<p>Make sure you have at least <a href="https://golang.org/">Go</a> 1.5 installed. Make sure your <code>GOPATH</code> is set, then:</p>
<pre><code>go get -u -v github.com/ncw/rclone</code></pre>
<p>This will build the binary in <code>$GOPATH/bin</code>. If you have built rclone before, update its dependencies first with</p>
<pre><code>go get -u -v github.com/ncw/rclone/...</code></pre>
<h2 id="installation-with-ansible">Installation with Ansible</h2>
<p>This can be done with <a href="https://github.com/stefangweichinger/ansible-rclone">Stefan Weichinger's ansible role</a>.</p>
<p>Instructions</p>
@@ -286,7 +308,7 @@ two-3.txt: renamed from: two.txt</code></pre>
<pre><code>rclone dedupe rename &quot;drive:Google Photos&quot;</code></pre>
<pre><code>rclone dedupe [mode] remote:path</code></pre>
<h3 id="options">Options</h3>
<pre><code>      --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|rename. (default &quot;interactive&quot;)</code></pre>
<h2 id="rclone-authorize">rclone authorize</h2>
<p>Remote authorization.</p>
<h3 id="synopsis-18">Synopsis</h3>
@@ -318,9 +340,17 @@ two-3.txt: renamed from: two.txt</code></pre>
<h3 id="synopsis-21">Synopsis</h3>
<p>This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.</p>
<pre><code>rclone gendocs output_directory</code></pre>
<h2 id="rclone-listremotes">rclone listremotes</h2>
<p>List all the remotes in the config file.</p>
<h3 id="synopsis-22">Synopsis</h3>
<p>rclone listremotes lists all the available remotes from the config file.</p>
<p>When used with the <code>-l</code> flag it lists the types too.</p>
<pre><code>rclone listremotes</code></pre>
<h3 id="options-1">Options</h3>
<pre><code> -l, --long Show the type as well as names.</code></pre>
<h2 id="rclone-mount">rclone mount</h2>
<p>Mount the remote as a mountpoint. <strong>EXPERIMENTAL</strong></p>
<h3 id="synopsis-23">Synopsis</h3>
<p>rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's cloud storage systems as a file system with FUSE.</p>
<p>This is <strong>EXPERIMENTAL</strong> - use with care.</p>
<p>First set up your remote using <code>rclone config</code>. Check it works with <code>rclone ls</code> etc.</p>
@@ -331,8 +361,8 @@ two-3.txt: renamed from: two.txt</code></pre>
<p>Or with OS X</p>
<pre><code>umount /path/to/local/mount</code></pre>
<h3 id="limitations">Limitations</h3>
<p>This can only write files sequentially; it can only seek when reading.</p>
<p>Rclone mount inherits rclone's directory handling. In rclone's world directories don't really exist. This means that empty directories will have a tendency to disappear once they fall out of the directory cache.</p>
<p>The bucket based FSes (eg swift, s3, google cloud storage, b2) won't work from the root - you will need to specify a bucket, or a path within the bucket. So <code>swift:</code> won't work whereas <code>swift:bucket</code> will as will <code>swift:bucket/path</code>.</p>
<p>Only supported on Linux, FreeBSD and OS X at the moment.</p>
<h3 id="rclone-mount-vs-rclone-synccopy">rclone mount vs rclone sync/copy</h3>
@@ -341,8 +371,9 @@ two-3.txt: renamed from: two.txt</code></pre>
<ul>
<li>All the remotes should work for read, but some may not for write
<ul>
<li>those which need to know the size in advance won't - eg B2, Amazon Drive</li>
<li>maybe should pass in size as -1 to mean work it out</li>
<li>Or put in an upload cache to cache the files on disk first</li>
</ul></li>
</ul>
<h3 id="todo">TODO</h3>
@@ -352,9 +383,21 @@ two-3.txt: renamed from: two.txt</code></pre>
<li>Move directories</li>
</ul>
<pre><code>rclone mount remote:path /path/to/mountpoint</code></pre>
<h3 id="options-2">Options</h3>
<pre><code>      --allow-non-empty           Allow mounting over a non-empty directory.
      --allow-other               Allow access to other users.
      --allow-root                Allow access to root user.
      --debug-fuse                Debug the FUSE internals - needs -v.
      --default-permissions       Makes kernel enforce access control based on the file mode.
      --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
      --gid uint32                Override the gid field set by the filesystem. (default 502)
      --max-read-ahead int        The number of bytes that can be prefetched for sequential reads. (default 128k)
      --no-modtime                Don&#39;t read the modification time (can speed things up).
      --no-seek                   Don&#39;t allow seeking in files.
      --read-only                 Mount read-only.
      --uid uint32                Override the uid field set by the filesystem. (default 502)
      --umask int                 Override the permission bits set by the filesystem. (default 2)
      --write-back-cache          Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.</code></pre>
<h2 id="copying-single-files">Copying single files</h2>
<p>rclone normally syncs or copies directories. However if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error <code>Failed to create file system for &quot;remote:file&quot;: is a file not a directory</code> if it isn't.</p>
<p>For example, suppose you have a remote with a file in called <code>test.jpg</code>, then you could copy just that file like this</p>
@@ -391,7 +434,7 @@ two-3.txt: renamed from: two.txt</code></pre>
<p>This can be used when scripting to make aged backups efficiently, eg</p>
<pre><code>rclone sync remote:current-backup remote:previous-backup
rclone sync /path/to/files remote:current-backup</code></pre>
<h2 id="options-3">Options</h2>
<p>Rclone has a number of options to control its behaviour.</p>
<p>Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as &quot;300ms&quot;, &quot;-1.5h&quot; or &quot;2h45m&quot;. Valid time units are &quot;ns&quot;, &quot;us&quot; (or &quot;µs&quot;), &quot;ms&quot;, &quot;s&quot;, &quot;m&quot;, &quot;h&quot;.</p>
<p>Options which use SIZE use kByte by default. However a suffix of <code>b</code> for bytes, <code>k</code> for kBytes, <code>M</code> for MBytes and <code>G</code> for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.</p>
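<p>As a sketch of the SIZE rules above, here is a small shell helper. The name <code>size_to_bytes</code> is ours, not part of rclone; it only mirrors the multipliers described in this section.</p>

```shell
# size_to_bytes is a hypothetical helper (not an rclone feature) that
# applies the SIZE rules above: bare numbers are kBytes, and the
# b|k|M|G suffixes are binary units (1, 2**10, 2**20, 2**30).
size_to_bytes() {
    n=${1%[bkMG]}               # strip a trailing suffix letter, if any
    case $1 in
        *b) mult=1 ;;
        *k) mult=1024 ;;
        *M) mult=1048576 ;;
        *G) mult=1073741824 ;;
        *)  mult=1024 ;;        # no suffix: kBytes by default
    esac
    echo $((n * mult))
}

size_to_bytes 10M   # prints 10485760
size_to_bytes 5     # prints 5120 (5 kBytes)
```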
@@ -399,6 +442,7 @@ rclone sync /path/to/files remote:current-backup</code></pre>
<p>Bandwidth limit in kBytes/s, or use suffix b|k|M|G. The default is <code>0</code> which means to not limit bandwidth.</p>
<p>For example to limit bandwidth usage to 10 MBytes/s use <code>--bwlimit 10M</code></p>
<p>This only limits the bandwidth of the data transfer, it doesn't limit the bandwidth of the directory listings etc.</p>
<p>Note that the units are Bytes/s not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a <code>--bwlimit 0.625M</code> parameter for rclone.</p>
<h3 id="checkersn">--checkers=N</h3>
<p>The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg s3, swift, dropbox) this can take a significant amount of time so they are run in parallel.</p>
<p>The default is to run 8 checkers in parallel.</p>
@@ -524,12 +568,15 @@ c/u/q&gt;</code></pre>
<p>These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with remote name eg <code>--drive-test-option</code> - see the docs for the remote in question.</p>
<h3 id="cpuprofilefile">--cpuprofile=FILE</h3>
<p>Write CPU profile to file. This can be analysed with <code>go tool pprof</code>.</p>
<h3 id="dump-auth">--dump-auth</h3>
<p>Dump HTTP headers - will contain sensitive info such as <code>Authorization:</code> headers - use <code>--dump-headers</code> to dump without <code>Authorization:</code> headers. Can be very verbose. Useful for debugging only.</p>
<h3 id="dump-bodies">--dump-bodies</h3>
<p>Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.</p>
<h3 id="dump-filters">--dump-filters</h3>
<p>Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.</p>
<h3 id="dump-headers">--dump-headers</h3>
<p>Dump HTTP headers with <code>Authorization:</code> lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.</p>
<p>Use <code>--dump-auth</code> if you do want the <code>Authorization:</code> headers.</p>
<h3 id="memprofilefile">--memprofile=FILE</h3>
<p>Write memory profile to file. This can be analysed with <code>go tool pprof</code>.</p>
<h3 id="no-check-certificatetruefalse">--no-check-certificate=true/false</h3>
@@ -567,7 +614,9 @@ c/u/q&gt;</code></pre>
<p>If you use the <code>-v</code> flag, rclone will produce <code>Error</code>, <code>Info</code> and <code>Debug</code> messages.</p>
<p>If you use the <code>--log-file=FILE</code> option, rclone will redirect <code>Error</code>, <code>Info</code> and <code>Debug</code> messages along with standard error to FILE.</p>
<h2 id="exit-code">Exit Code</h2>
<p>If any errors occurred during the command, rclone will exit with a non-zero exit code. This allows scripts to detect when rclone operations have failed.</p>
<p>During the startup phase rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.</p>
<p>When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there are still transfers with errors remaining. For every error counted there will be a high priority log message (visible with <code>-q</code>) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.</p>
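<p>A script can branch on the exit code directly. A minimal sketch; the wrapper name <code>run_and_report</code> and the example remote path are placeholders, not rclone features.</p>

```shell
# Minimal sketch of acting on an exit code in a script. The command is
# passed in as arguments; in real use it would be something like:
#   run_and_report rclone sync /path/to/files remote:backup
run_and_report() {
    if "$@"; then
        echo "transfer succeeded"
    else
        status=$?
        echo "transfer failed with exit code $status" >&2
        return "$status"
    fi
}
```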
<h1 id="configuring-rclone-on-a-remote-headless-machine">Configuring rclone on a remote / headless machine</h1>
<p>Some of the configurations (those involving oauth2) require an Internet connected web browser.</p>
<p>If you are trying to set rclone up on a remote or headless box with no browser available on it (eg a NAS or a server in a datacenter) then you will need to use an alternative means of configuration. There are two ways of doing it, described below.</p>
@@ -664,10 +713,10 @@ y/e/d&gt;</code></pre>
<h3 id="directories">Directories</h3>
<p>Rclone keeps track of directories that could match any file patterns.</p>
<p>Eg if you add the include rule</p>
<pre><code>/a/*.jpg</code></pre>
<p>Rclone will synthesize the directory include rule</p>
<pre><code>/a/</code></pre>
<p>If you put any rules which end in <code>/</code> then it will only match directories.</p>
<p>Directory matches are <strong>only</strong> used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google cloud storage, b2) which don't have a concept of directory.</p>
<h3 id="differences-between-rsync-and-rclone-patterns">Differences between rsync and rclone patterns</h3>
<p>Rclone implements bash style <code>{a,b,c}</code> glob matching which rsync doesn't.</p>
@@ -819,6 +868,7 @@ user2/stuff</code></pre>
<th align="center">ModTime</th>
<th align="center">Case Insensitive</th>
<th align="center">Duplicate Files</th>
<th align="center">MIME Type</th>
</tr>
</thead>
<tbody>
@@ -828,6 +878,7 @@ user2/stuff</code></pre>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">Yes</td>
<td align="center">R/W</td>
</tr>
<tr class="even">
<td align="left">Amazon S3</td>
@@ -835,6 +886,7 @@ user2/stuff</code></pre>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">R/W</td>
</tr>
<tr class="odd">
<td align="left">Openstack Swift</td>
@@ -842,6 +894,7 @@ user2/stuff</code></pre>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">R/W</td>
</tr>
<tr class="even">
<td align="left">Dropbox</td>
@@ -849,6 +902,7 @@ user2/stuff</code></pre>
<td align="center">No</td>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">R</td>
</tr>
<tr class="odd">
<td align="left">Google Cloud Storage</td>
@@ -856,6 +910,7 @@ user2/stuff</code></pre>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">R/W</td>
</tr>
<tr class="even">
<td align="left">Amazon Drive</td>
@@ -863,6 +918,7 @@ user2/stuff</code></pre>
<td align="center">No</td>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">R</td>
</tr>
<tr class="odd">
<td align="left">Microsoft One Drive</td>
@@ -870,6 +926,7 @@ user2/stuff</code></pre>
<td align="center">Yes</td>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">R</td>
</tr>
<tr class="even">
<td align="left">Hubic</td>
@@ -877,6 +934,7 @@ user2/stuff</code></pre>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">R/W</td>
</tr>
<tr class="odd">
<td align="left">Backblaze B2</td>
@@ -884,6 +942,7 @@ user2/stuff</code></pre>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">R/W</td>
</tr>
<tr class="even">
<td align="left">Yandex Disk</td>
@@ -891,6 +950,7 @@ user2/stuff</code></pre>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">R/W</td>
</tr>
<tr class="odd">
<td align="left">The local filesystem</td>
@@ -898,6 +958,7 @@ user2/stuff</code></pre>
<td align="center">Yes</td>
<td align="center">Depends</td>
<td align="center">No</td>
<td align="center">-</td>
</tr>
</tbody>
</table>
@ -921,6 +982,129 @@ The hashes are used when transferring data as an integrity check and can be spec
<h3 id="duplicate-files">Duplicate files</h3>
<p>If a cloud storage system allows duplicate files then it can have two objects with the same name.</p>
<p>This confuses rclone greatly when syncing - use the <code>rclone dedupe</code> command to rename or remove duplicates.</p>
<h3 id="mime-type">MIME Type</h3>
<p>MIME types (also known as media types) classify types of documents using a simple text classification, eg <code>text/html</code> or <code>application/pdf</code>.</p>
<p>Some cloud storage systems support reading (<code>R</code>) the MIME type of objects and some support writing (<code>W</code>) the MIME type of objects.</p>
<p>The MIME type can be important if you are serving files directly to HTTP from the storage system.</p>
<p>If you are copying from a remote which supports reading (<code>R</code>) to a remote which supports writing (<code>W</code>) then rclone will preserve the MIME types. Otherwise they will be guessed from the extension, or the remote itself may assign the MIME type.</p>
<h2 id="optional-features">Optional Features</h2>
<p>All the remotes support a basic set of features, but there are some optional features supported by some remotes used to make some operations more efficient.</p>
<table>
<thead>
<tr class="header">
<th align="left">Name</th>
<th align="center">Purge</th>
<th align="center">Copy</th>
<th align="center">Move</th>
<th align="center">DirMove</th>
<th align="center">CleanUp</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left">Google Drive</td>
<td align="center">Yes</td>
<td align="center">Yes</td>
<td align="center">Yes</td>
<td align="center">Yes</td>
<td align="center">No <a href="https://github.com/ncw/rclone/issues/575">#575</a></td>
</tr>
<tr class="even">
<td align="left">Amazon S3</td>
<td align="center">No</td>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">No</td>
</tr>
<tr class="odd">
<td align="left">Openstack Swift</td>
<td align="center">Yes †</td>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">No</td>
</tr>
<tr class="even">
<td align="left">Dropbox</td>
<td align="center">Yes</td>
<td align="center">Yes</td>
<td align="center">Yes</td>
<td align="center">Yes</td>
<td align="center">No <a href="https://github.com/ncw/rclone/issues/575">#575</a></td>
</tr>
<tr class="odd">
<td align="left">Google Cloud Storage</td>
<td align="center">Yes</td>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">No</td>
</tr>
<tr class="even">
<td align="left">Amazon Drive</td>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">No <a href="https://github.com/ncw/rclone/issues/721">#721</a></td>
<td align="center">No <a href="https://github.com/ncw/rclone/issues/721">#721</a></td>
<td align="center">No <a href="https://github.com/ncw/rclone/issues/575">#575</a></td>
</tr>
<tr class="odd">
<td align="left">Microsoft One Drive</td>
<td align="center">Yes</td>
<td align="center">Yes</td>
<td align="center">No <a href="https://github.com/ncw/rclone/issues/197">#197</a></td>
<td align="center">No <a href="https://github.com/ncw/rclone/issues/197">#197</a></td>
<td align="center">No <a href="https://github.com/ncw/rclone/issues/575">#575</a></td>
</tr>
<tr class="even">
<td align="left">Hubic</td>
<td align="center">Yes †</td>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">No</td>
</tr>
<tr class="odd">
<td align="left">Backblaze B2</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">Yes</td>
</tr>
<tr class="even">
<td align="left">Yandex Disk</td>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">No</td>
<td align="center">No <a href="https://github.com/ncw/rclone/issues/575">#575</a></td>
</tr>
<tr class="odd">
<td align="left">The local filesystem</td>
<td align="center">Yes</td>
<td align="center">No</td>
<td align="center">Yes</td>
<td align="center">Yes</td>
<td align="center">No</td>
</tr>
</tbody>
</table>
<h3 id="purge">Purge</h3>
<p>This deletes a directory quicker than just deleting all the files in the directory.</p>
<p>† Note Swift and Hubic implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.</p>
<h3 id="copy">Copy</h3>
<p>Used when copying an object to and from the same remote. This is known as a server side copy so you can copy a file without downloading it and uploading it again. It is used if you use <code>rclone copy</code> or <code>rclone move</code> if the remote doesn't support <code>Move</code> directly.</p>
<p>If the server doesn't support <code>Copy</code> directly then for copy operations the file is downloaded then re-uploaded.</p>
<h3 id="move">Move</h3>
<p>Used when moving/renaming an object on the same remote. This is known as a server side move of a file. This is used in <code>rclone move</code> if the server doesn't support <code>DirMove</code>.</p>
<p>If the server isn't capable of <code>Move</code> then rclone simulates it with <code>Copy</code> then delete. If the server doesn't support <code>Copy</code> then rclone will download the file and re-upload it.</p>
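<p>The fallback chain just described can be sketched as shell pseudocode (the capability flags are illustrative, not real rclone options):</p>

```shell
# Sketch of how a move falls back when the server lacks capabilities.
can_move=false   # does the server support Move?
can_copy=true    # does the server support Copy?

if [ "$can_move" = true ]; then
    action="server side Move"
elif [ "$can_copy" = true ]; then
    action="server side Copy, then delete the source"
else
    action="download and re-upload, then delete the source"
fi
echo "$action"
```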
<h3 id="dirmove">DirMove</h3>
<p>This is used to implement <code>rclone move</code> to move a directory if possible. If it isn't then it will use <code>Move</code> on each file (which falls back to <code>Copy</code> then download and upload - see <code>Move</code> section).</p>
<h3 id="cleanup">CleanUp</h3>
<p>This is used for emptying the trash for a remote by <code>rclone cleanup</code>.</p>
<p>If the server can't do <code>CleanUp</code> then <code>rclone cleanup</code> will return an error.</p>
<h2 id="google-drive">Google Drive</h2> <h2 id="google-drive">Google Drive</h2>
<p>Paths are specified as <code>drive:path</code></p> <p>Paths are specified as <code>drive:path</code></p>
<p>Drive paths may be as deep as required, eg <code>drive:directory/subdirectory</code>.</p> <p>Drive paths may be as deep as required, eg <code>drive:directory/subdirectory</code>.</p>
@ -1053,15 +1237,25 @@ y/e/d&gt; y</code></pre>
<td align="left">Microsoft Office Document</td> <td align="left">Microsoft Office Document</td>
</tr> </tr>
<tr class="even"> <tr class="even">
<td align="left">epub</td>
<td align="left">application/epub+zip</td>
<td align="left">E-book format</td>
</tr>
<tr class="odd">
<td align="left">html</td> <td align="left">html</td>
<td align="left">text/html</td> <td align="left">text/html</td>
<td align="left">An HTML Document</td> <td align="left">An HTML Document</td>
</tr> </tr>
<tr class="odd"> <tr class="even">
<td align="left">jpg</td> <td align="left">jpg</td>
<td align="left">image/jpeg</td> <td align="left">image/jpeg</td>
<td align="left">A JPEG Image File</td> <td align="left">A JPEG Image File</td>
</tr> </tr>
<tr class="odd">
<td align="left">odp</td>
<td align="left">application/vnd.oasis.opendocument.presentation</td>
<td align="left">Openoffice Presentation</td>
</tr>
<tr class="even"> <tr class="even">
<td align="left">ods</td> <td align="left">ods</td>
<td align="left">application/vnd.oasis.opendocument.spreadsheet</td> <td align="left">application/vnd.oasis.opendocument.spreadsheet</td>
@ -1103,21 +1297,26 @@ y/e/d&gt; y</code></pre>
<td align="left">Scalable Vector Graphics Format</td> <td align="left">Scalable Vector Graphics Format</td>
</tr> </tr>
<tr class="even"> <tr class="even">
<td align="left">tsv</td>
<td align="left">text/tab-separated-values</td>
<td align="left">Standard TSV format for spreadsheets</td>
</tr>
<tr class="odd">
<td align="left">txt</td> <td align="left">txt</td>
<td align="left">text/plain</td> <td align="left">text/plain</td>
<td align="left">Plain Text</td> <td align="left">Plain Text</td>
</tr> </tr>
<tr class="odd"> <tr class="even">
<td align="left">xls</td> <td align="left">xls</td>
<td align="left">application/vnd.ms-excel</td> <td align="left">application/vnd.ms-excel</td>
<td align="left">Microsoft Office Spreadsheet</td> <td align="left">Microsoft Office Spreadsheet</td>
</tr> </tr>
<tr class="even"> <tr class="odd">
<td align="left">xlsx</td> <td align="left">xlsx</td>
<td align="left">application/vnd.openxmlformats-officedocument.spreadsheetml.sheet</td> <td align="left">application/vnd.openxmlformats-officedocument.spreadsheetml.sheet</td>
<td align="left">Microsoft Office Spreadsheet</td> <td align="left">Microsoft Office Spreadsheet</td>
</tr> </tr>
<tr class="odd"> <tr class="even">
<td align="left">zip</td> <td align="left">zip</td>
<td align="left">application/zip</td> <td align="left">application/zip</td>
<td align="left">A ZIP file of HTML, Images CSS</td> <td align="left">A ZIP file of HTML, Images CSS</td>
@ -1274,6 +1473,17 @@ Choose a number from below, or type in your own value
2 / AES256 2 / AES256
\ &quot;AES256&quot; \ &quot;AES256&quot;
server_side_encryption&gt; server_side_encryption&gt;
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ &quot;&quot;
2 / Standard storage class
\ &quot;STANDARD&quot;
3 / Reduced redundancy storage class
\ &quot;REDUCED_REDUNDANCY&quot;
4 / Standard Infrequent Access storage class
\ &quot;STANDARD_IA&quot;
storage_class&gt;
Remote config Remote config
-------------------- --------------------
[remote] [remote]
@ -1318,6 +1528,19 @@ y/e/d&gt; y</code></pre>
<li>Running <code>rclone</code> on an EC2 instance with an IAM role</li> <li>Running <code>rclone</code> on an EC2 instance with an IAM role</li>
</ul> </ul>
<p>If none of these options actually provides <code>rclone</code> with AWS credentials then S3 interaction will be non-authenticated (see below).</p>
<h3 id="specific-options-1">Specific options</h3>
<p>Here are the command line options specific to this cloud storage system.</p>
<h4 id="s3-aclstring">--s3-acl=STRING</h4>
<p>Canned ACL used when creating buckets and/or storing objects in S3.</p>
<p>For more info visit the <a href="http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl">canned ACL docs</a>.</p>
<h4 id="s3-storage-classstring">--s3-storage-class=STRING</h4>
<p>Storage class to upload new objects with.</p>
<p>Available options include:</p>
<ul>
<li>STANDARD - default storage class</li>
<li>STANDARD_IA - for less frequently accessed data (e.g. backups)</li>
<li>REDUCED_REDUNDANCY (only for noncritical, reproducible data, has lower redundancy)</li>
</ul>
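<p>As a sketch, a remote that always uploads with the infrequent access class could carry the setting in its config section (the remote name is illustrative; the <code>storage_class</code> key matches the config question shown earlier):</p>

```
[s3remote]
type = s3
storage_class = STANDARD_IA
```

<p>The same setting can be overridden for a single run with <code>--s3-storage-class STANDARD</code>.</p>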
<h3 id="anonymous-access-to-public-buckets">Anonymous access to public buckets</h3> <h3 id="anonymous-access-to-public-buckets">Anonymous access to public buckets</h3>
<p>If you want to use rclone to access a public bucket, configure with a blank <code>access_key_id</code> and <code>secret_access_key</code>. Eg</p> <p>If you want to use rclone to access a public bucket, configure with a blank <code>access_key_id</code> and <code>secret_access_key</code>. Eg</p>
<pre><code>No remotes found - make a new one <pre><code>No remotes found - make a new one
@ -1503,7 +1726,26 @@ y/e/d&gt; y</code></pre>
<pre><code>rclone ls remote:container</code></pre> <pre><code>rclone ls remote:container</code></pre>
<p>Sync <code>/home/local/directory</code> to the remote container, deleting any excess files in the container.</p> <p>Sync <code>/home/local/directory</code> to the remote container, deleting any excess files in the container.</p>
<pre><code>rclone sync /home/local/directory remote:container</code></pre> <pre><code>rclone sync /home/local/directory remote:container</code></pre>
<h3 id="configuration-from-an-openstack-credentials-file">Configuration from an Openstack credentials file</h3>
<p>An OpenStack credentials file typically looks something like this (without the comments)</p>
<pre><code>export OS_AUTH_URL=https://a.provider.net/v2.0
export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
export OS_TENANT_NAME=&quot;1234567890123456&quot;
export OS_USERNAME=&quot;123abc567xy&quot;
echo &quot;Please enter your OpenStack Password: &quot;
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_REGION_NAME=&quot;SBG1&quot;
if [ -z &quot;$OS_REGION_NAME&quot; ]; then unset OS_REGION_NAME; fi</code></pre>
<p>The config file needs to look something like this where <code>$OS_USERNAME</code> represents the value of the <code>OS_USERNAME</code> variable - <code>123abc567xy</code> in the example above.</p>
<pre><code>[remote]
type = swift
user = $OS_USERNAME
key = $OS_PASSWORD
auth = $OS_AUTH_URL
tenant = $OS_TENANT_NAME</code></pre>
<p>Note that you may (or may not) need to set <code>region</code> too - try without first.</p>
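<p>The translation above can be scripted. This sketch (with placeholder credentials) prints a config section in the shape rclone expects:</p>

```shell
# Placeholder OpenStack credentials - substitute your own values.
OS_AUTH_URL="https://a.provider.net/v2.0"
OS_TENANT_NAME="1234567890123456"
OS_USERNAME="123abc567xy"
OS_PASSWORD="XXXXXXXX"

# Print a config section in the shape rclone expects.
printf '[remote]\ntype = swift\nuser = %s\nkey = %s\nauth = %s\ntenant = %s\n' \
  "$OS_USERNAME" "$OS_PASSWORD" "$OS_AUTH_URL" "$OS_TENANT_NAME"
```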
<h3 id="specific-options-2">Specific options</h3>
<p>Here are the command line options specific to this cloud storage system.</p> <p>Here are the command line options specific to this cloud storage system.</p>
<h4 id="swift-chunk-sizesize">--swift-chunk-size=SIZE</h4> <h4 id="swift-chunk-sizesize">--swift-chunk-size=SIZE</h4>
<p>Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value.</p> <p>Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value.</p>
@ -1516,6 +1758,7 @@ y/e/d&gt; y</code></pre>
<h4 id="rclone-gives-failed-to-create-file-system-for-remote-bad-request">Rclone gives Failed to create file system for &quot;remote:&quot;: Bad Request</h4> <h4 id="rclone-gives-failed-to-create-file-system-for-remote-bad-request">Rclone gives Failed to create file system for &quot;remote:&quot;: Bad Request</h4>
<p>Due to an oddity of the underlying swift library, it gives a &quot;Bad Request&quot; error rather than a more sensible error when the authentication fails for Swift.</p> <p>Due to an oddity of the underlying swift library, it gives a &quot;Bad Request&quot; error rather than a more sensible error when the authentication fails for Swift.</p>
<p>So this most likely means your username / password is wrong. You can investigate further with the <code>--dump-bodies</code> flag.</p> <p>So this most likely means your username / password is wrong. You can investigate further with the <code>--dump-bodies</code> flag.</p>
<p>This may also be caused by specifying the region when you shouldn't have (eg OVH).</p>
<h4 id="rclone-gives-failed-to-create-file-system-response-didnt-have-storage-storage-url-and-auth-token">Rclone gives Failed to create file system: Response didn't have storage storage url and auth token</h4> <h4 id="rclone-gives-failed-to-create-file-system-response-didnt-have-storage-storage-url-and-auth-token">Rclone gives Failed to create file system: Response didn't have storage storage url and auth token</h4>
<p>This is most likely caused by forgetting to specify your tenant when setting up a swift remote.</p> <p>This is most likely caused by forgetting to specify your tenant when setting up a swift remote.</p>
<h2 id="dropbox">Dropbox</h2> <h2 id="dropbox">Dropbox</h2>
@ -1589,7 +1832,7 @@ y/e/d&gt; y</code></pre>
</ul> </ul>
<p>Dropbox doesn't return any sort of checksum (MD5 or SHA1).</p> <p>Dropbox doesn't return any sort of checksum (MD5 or SHA1).</p>
<p>Together that means that syncs to dropbox will effectively have the <code>--size-only</code> flag set.</p> <p>Together that means that syncs to dropbox will effectively have the <code>--size-only</code> flag set.</p>
<h3 id="specific-options-3">Specific options</h3>
<p>Here are the command line options specific to this cloud storage system.</p> <p>Here are the command line options specific to this cloud storage system.</p>
<h4 id="dropbox-chunk-sizesize">--dropbox-chunk-size=SIZE</h4> <h4 id="dropbox-chunk-sizesize">--dropbox-chunk-size=SIZE</h4>
<p>Upload chunk size. Max 150M. The default is 128MB. Note that this isn't buffered into memory.</p> <p>Upload chunk size. Max 150M. The default is 128MB. Note that this isn't buffered into memory.</p>
@ -1781,19 +2024,25 @@ y/e/d&gt; y</code></pre>
<p>It does store MD5SUMs so for a more accurate sync, you can use the <code>--checksum</code> flag.</p> <p>It does store MD5SUMs so for a more accurate sync, you can use the <code>--checksum</code> flag.</p>
<h3 id="deleting-files-1">Deleting files</h3> <h3 id="deleting-files-1">Deleting files</h3>
<p>Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website.</p> <p>Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website.</p>
<h3 id="using-with-non-.com-amazon-accounts">Using with non <code>.com</code> Amazon accounts</h3>
<p>Let's say you usually use <code>amazon.co.uk</code>. When you authenticate with rclone it will take you to an <code>amazon.com</code> page to log in. Your <code>amazon.co.uk</code> email and password should work here just fine.</p>
<h3 id="specific-options-4">Specific options</h3>
<p>Here are the command line options specific to this cloud storage system.</p> <p>Here are the command line options specific to this cloud storage system.</p>
<h4 id="acd-templink-thresholdsize">--acd-templink-threshold=SIZE</h4> <h4 id="acd-templink-thresholdsize">--acd-templink-threshold=SIZE</h4>
<p>Files this size or more will be downloaded via their <code>tempLink</code>. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.</p> <p>Files this size or more will be downloaded via their <code>tempLink</code>. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.</p>
<p>To download files above this threshold, rclone requests a <code>tempLink</code> which downloads the file through a temporary URL directly from the underlying S3 storage.</p> <p>To download files above this threshold, rclone requests a <code>tempLink</code> which downloads the file through a temporary URL directly from the underlying S3 storage.</p>
<h4 id="acd-upload-wait-per-gbtime">--acd-upload-wait-per-gb=TIME</h4>
<p>Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear.</p>
<p>The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears.</p>
<p>You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.</p>
<p>These values were determined empirically by observing lots of uploads of big files for a range of file sizes.</p>
<p>Upload with the <code>-v</code> flag to see more info about what rclone is doing in this situation.</p>
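<p>A back-of-the-envelope sketch of the default (3 minutes per GB) for a hypothetical upload:</p>

```shell
minutes_per_gb=3        # the default for --acd-upload-wait-per-gb
file_size_gb=10         # a hypothetical 10GB upload
wait_minutes=$((file_size_gb * minutes_per_gb))
echo "rclone will wait up to $wait_minutes minutes for the file to appear"
```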
<h3 id="limitations-4">Limitations</h3> <h3 id="limitations-4">Limitations</h3>
<p>Note that Amazon Drive is case insensitive so you can't have a file called &quot;Hello.doc&quot; and one called &quot;hello.doc&quot;.</p> <p>Note that Amazon Drive is case insensitive so you can't have a file called &quot;Hello.doc&quot; and one called &quot;hello.doc&quot;.</p>
<p>Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see <code>--retries</code> flag) which should hopefully work around this problem.</p> <p>Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see <code>--retries</code> flag) which should hopefully work around this problem.</p>
<p>Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.</p> <p>Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.</p>
<p>At the time of writing (Jan 2016) this limit is in the area of 50GB per file. This means that larger files are likely to fail.</p>
<p>Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation as it would any other failure. To avoid this problem, use the <code>--max-size 50G</code> option to limit the maximum size of uploaded files.</p>
<h2 id="microsoft-one-drive">Microsoft One Drive</h2> <h2 id="microsoft-one-drive">Microsoft One Drive</h2>
<p>Paths are specified as <code>remote:path</code></p> <p>Paths are specified as <code>remote:path</code></p>
<p>Paths may be as deep as required, eg <code>remote:directory/subdirectory</code>.</p> <p>Paths may be as deep as required, eg <code>remote:directory/subdirectory</code>.</p>
@ -1870,7 +2119,7 @@ y/e/d&gt; y</code></pre>
<p>One drive supports SHA1 type hashes, so you can use <code>--checksum</code> flag.</p> <p>One drive supports SHA1 type hashes, so you can use <code>--checksum</code> flag.</p>
<h3 id="deleting-files-2">Deleting files</h3> <h3 id="deleting-files-2">Deleting files</h3>
<p>Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the One Drive website.</p> <p>Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the One Drive website.</p>
<h3 id="specific-options-5">Specific options</h3>
<p>Here are the command line options specific to this cloud storage system.</p> <p>Here are the command line options specific to this cloud storage system.</p>
<h4 id="onedrive-chunk-sizesize">--onedrive-chunk-size=SIZE</h4> <h4 id="onedrive-chunk-sizesize">--onedrive-chunk-size=SIZE</h4>
<p>Above this size files will be chunked - must be multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.</p> <p>Above this size files will be chunked - must be multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.</p>
@ -2059,7 +2308,25 @@ $ rclone -q ls b2:cleanup-test
$ rclone -q --b2-versions ls b2:cleanup-test $ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt</code></pre> 9 one.txt</code></pre>
<h3 id="data-usage">Data usage</h3>
<p>It is useful to know how many requests are sent to the server in different scenarios.</p>
<p>All copy commands send the following 4 requests:</p>
<pre><code>/b2api/v1/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names</code></pre>
<p>The <code>b2_list_file_names</code> request will be sent once for every 1k files in the remote path, providing the checksum and modification time of the listed files. As of version 1.33 issue <a href="https://github.com/ncw/rclone/issues/818">#818</a> causes extra requests to be sent when using B2 with Crypt. When a copy operation does not require any files to be uploaded, no more requests will be sent.</p>
<p>Uploading files that do not require chunking will send 2 requests per file upload:</p>
<pre><code>/b2api/v1/b2_get_upload_url
/b2api/v1/b2_upload_file/</code></pre>
<p>Uploading files requiring chunking will send 2 requests (one each to start and finish the upload) and another 2 requests for each chunk:</p>
<pre><code>/b2api/v1/b2_start_large_file
/b2api/v1/b2_get_upload_part_url
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file</code></pre>
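<p>Putting those request counts together, the cost of one chunked upload can be estimated like this (file and chunk sizes are illustrative):</p>

```shell
file_size=1000000000          # a hypothetical 1 GB file
chunk_size=96000000           # roughly the default 96M chunk size
# ceiling division to count the chunks needed
chunks=$(( (file_size + chunk_size - 1) / chunk_size ))
# start + finish, plus get_upload_part_url and upload_part per chunk
requests=$(( 2 + 2 * chunks ))
echo "$chunks chunks, about $requests chunk-related requests"
```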
<h3 id="b2-with-crypt">B2 with crypt</h3>
<p>When using B2 with <code>crypt</code> files are encrypted into a temporary location and streamed from there. This is required to calculate the encrypted file's checksum before beginning the upload. On Windows the %TMPDIR% environment variable is used as the temporary location. If the file requires chunking, both the chunking and encryption will take place in memory.</p>
<h3 id="specific-options-6">Specific options</h3>
<p>Here are the command line options specific to this cloud storage system.</p> <p>Here are the command line options specific to this cloud storage system.</p>
<h4 id="b2-chunk-size-valueesize">--b2-chunk-size=SIZE</h4>
<p>When uploading large files chunk the file into this size. Note that these chunks are buffered in memory and there might be a maximum of <code>--transfers</code> chunks in progress at once. 100,000,000 Bytes is the minimum size (default 96M).</p>
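<p>Since chunks are buffered in memory, worst case memory use is roughly the chunk size multiplied by the number of simultaneous transfers. A quick sketch with the defaults (assuming 96M means 96*1024*1024 bytes and the default <code>--transfers</code> of 4):</p>

```shell
transfers=4                         # rclone's default --transfers value
chunk_size=$((96 * 1024 * 1024))    # default --b2-chunk-size of 96M
buffered=$((transfers * chunk_size))
echo "worst case: $buffered bytes buffered"
```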
@ -2203,6 +2470,8 @@ Choose a number from below, or type in your own value
\ &quot;yandex&quot; \ &quot;yandex&quot;
Storage&gt; 5 Storage&gt; 5
Remote to encrypt/decrypt. Remote to encrypt/decrypt.
Normally should contain a &#39;:&#39; and a path, eg &quot;myremote:path/to/dir&quot;,
&quot;myremote:bucket&quot; or &quot;myremote:&quot;
remote&gt; remote:path remote&gt; remote:path
How to encrypt the filenames. How to encrypt the filenames.
Choose a number from below, or type in your own value Choose a number from below, or type in your own value
@ -2240,8 +2509,8 @@ Remote config
[secret] [secret]
remote = remote:path remote = remote:path
filename_encryption = standard filename_encryption = standard
password = CfDxopZIXFG0Oo-ac7dPLWWOHkNJbw password = *** ENCRYPTED ***
password2 = HYUpfuzHJL8qnX9fOaIYijq0xnVLwyVzp3y4SF3TwYqAU6HLysk password2 = *** ENCRYPTED ***
-------------------- --------------------
y) Yes this is OK y) Yes this is OK
e) Edit this remote e) Edit this remote
@ -2250,6 +2519,10 @@ y/e/d&gt; y</code></pre>
<p><strong>Important</strong> The password stored in the config file is lightly obscured, so it isn't immediately obvious what it is. It is in no way secure unless you use config file encryption.</p>
<p>A long passphrase is recommended, or you can use a random one. Note that if you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible - all the secrets used are derived from those two passwords/passphrases.</p>
<p>Note that rclone does not encrypt</p>
<ul>
<li>file length - this can be calculated within 16 bytes</li>
<li>modification time - used for syncing</li>
</ul>
<h2 id="specifying-the-remote">Specifying the remote</h2>
<p>In normal use, make sure the remote has a <code>:</code> in. If you specify the remote without a <code>:</code> then rclone will use a local directory of that name. So if you use a remote of <code>/path/to/secret/files</code> then rclone will encrypt stuff to that directory. If you use a remote of <code>name</code> then rclone will put files in a directory called <code>name</code> in the current directory.</p>
<p>If you specify the remote as <code>remote:path/to/dir</code> then rclone will store encrypted files in <code>path/to/dir</code> on the remote. If you are using file name encryption, then when you save files to <code>secret:subdir/subfile</code> this will store them in the unencrypted path <code>path/to/dir</code> but the <code>subdir/subfile</code> bit will be encrypted.</p>
<p>Note that unless you want encrypted bucket names (which are difficult to manage because you won't know what directory they represent in web interfaces etc), you should probably specify a bucket, eg <code>remote:secretbucket</code> when using bucket based remotes such as S3, Swift, Hubic, B2, GCS.</p>
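<p>For example (the remote names and bucket are illustrative), a crypt remote wrapping a dedicated bucket on an S3 remote might end up with a config section like:</p>

```
[secret]
type = crypt
remote = s3remote:secretbucket
filename_encryption = standard
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***
```

<p>Files saved to <code>secret:path/file</code> then land in <code>secretbucket</code> with encrypted names, while the bucket name itself stays readable.</p>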
<h2 id="example">Example</h2> <h2 id="example">Example</h2>
<p>To test I made a little directory of files using &quot;standard&quot; file name encryption.</p> <p>To test I made a little directory of files using &quot;standard&quot; file name encryption.</p>
<pre><code>plaintext/ <pre><code>plaintext/
@ -2293,6 +2566,9 @@ $ rclone -q ls secret:
<p>Standard</p>
<ul>
<li>file names encrypted</li>
<li>file names can't be as long (~156 characters)</li>
<li>can use sub paths and copy single files</li>
<li>directory structure visible</li>
<li>identical file names will have identical uploaded names</li>
<li>can use shortcuts to shorten the directory recursion</li>
</ul>
<p>Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using &quot;Standard&quot; file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.</p> <p>Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using &quot;Standard&quot; file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.</p>
<p>There may be an even more secure file name encryption mode in the future which will address the long file name problem.</p> <p>There may be an even more secure file name encryption mode in the future which will address the long file name problem.</p>
<h3 id="modified-time-and-hashes-1">Modified time and hashes</h3>
<p>Crypt stores modification times using the underlying remote so support depends on that.</p>
<p>Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.</p>
<h2 id="file-formats">File formats</h2> <h2 id="file-formats">File formats</h2>
<h3 id="file-encryption">File encryption</h3> <h3 id="file-encryption">File encryption</h3>
<p>Files are encrypted 1:1 source file to destination object. The file has a header and is divided into chunks.</p> <p>Files are encrypted 1:1 source file to destination object. The file has a header and is divided into chunks.</p>
@ -2369,8 +2645,101 @@ nounc = true</code></pre>
<p>And use rclone like this:</p> <p>And use rclone like this:</p>
<p><code>rclone copy c:\src nounc:z:\dst</code></p> <p><code>rclone copy c:\src nounc:z:\dst</code></p>
<p>This will use UNC paths on <code>c:\src</code> but not on <code>z:\dst</code>. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.</p> <p>This will use UNC paths on <code>c:\src</code> but not on <code>z:\dst</code>. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.</p>
<h3 id="specific-options-7">Specific options</h3>
<p>Here are the command line options specific to local storage</p>
<h4 id="one-file-system--x">--one-file-system, -x</h4>
<p>This tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.</p>
<p>For example, if you have a directory hierarchy like this</p>
<pre><code>root
├── disk1 - disk1 mounted on the root
│   └── file3 - stored on disk1
├── disk2 - disk2 mounted on the root
│   └── file4 - stored on disk2
├── file1 - stored on the root disk
└── file2 - stored on the root disk</code></pre>
<p>Using <code>rclone --one-file-system copy root remote:</code> will only copy <code>file1</code> and <code>file2</code>. Eg</p>
<pre><code>$ rclone -q --one-file-system ls root
0 file1
0 file2</code></pre>
<pre><code>$ rclone -q ls root
0 disk1/file3
0 disk2/file4
0 file1
0 file2</code></pre>
<p><strong>NB</strong> Rclone (like most unix tools such as <code>du</code>, <code>rsync</code> and <code>tar</code>) treats a bind mount to the same device as being on the same filesystem.</p>
<p><strong>NB</strong> This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will not appear as a valid flag.</p>
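<p>The underlying idea is a device ID comparison. A standalone sketch of the concept (not rclone's actual implementation) using GNU <code>stat</code> on Linux:</p>

```shell
# Two paths on the same filesystem report the same device ID,
# so a traversal can tell when it is about to cross a mount point.
dir=$(mktemp -d)
touch "$dir/file1"
dev_dir=$(stat -c %d "$dir")
dev_file=$(stat -c %d "$dir/file1")
if [ "$dev_dir" = "$dev_file" ]; then
    echo "same filesystem - would be recursed into"
fi
rm -rf "$dir"
```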
<h2 id="changelog">Changelog</h2> <h2 id="changelog">Changelog</h2>
<ul> <ul>
<li>v1.34 - 2016-11-06
<ul>
<li>New Features</li>
<li>Stop single file and <code>--files-from</code> operations iterating through the source bucket.</li>
<li>Stop removing failed upload to cloud storage remotes</li>
<li>Make ContentType be preserved for cloud to cloud copies</li>
<li>Add support to toggle bandwidth limits via SIGUSR2 - thanks Marco Paganini</li>
<li><code>rclone check</code> shows count of hashes that couldn't be checked</li>
<li><code>rclone listremotes</code> command</li>
<li>Support linux/arm64 build - thanks Fredrik Fornwall</li>
<li>Remove <code>Authorization:</code> lines from <code>--dump-headers</code> output</li>
<li>Bug Fixes</li>
<li>Ignore files with control characters in the names</li>
<li>Fix <code>rclone move</code> command
<ul>
<li>Delete src files which already existed in dst</li>
<li>Fix deletion of src file when dst file older</li>
</ul></li>
<li>Fix <code>rclone check</code> on crypted file systems</li>
<li>Make failed uploads not count as &quot;Transferred&quot;</li>
<li>Make sure high level retries show with <code>-q</code></li>
<li>Use a vendor directory with godep for repeatable builds</li>
<li><code>rclone mount</code> - FUSE</li>
<li>Implement FUSE mount options
<ul>
<li><code>--no-modtime</code>, <code>--debug-fuse</code>, <code>--read-only</code>, <code>--allow-non-empty</code>, <code>--allow-root</code>, <code>--allow-other</code></li>
<li><code>--default-permissions</code>, <code>--write-back-cache</code>, <code>--max-read-ahead</code>, <code>--umask</code>, <code>--uid</code>, <code>--gid</code></li>
</ul></li>
<li>Add <code>--dir-cache-time</code> to control caching of directory entries</li>
<li>Implement seek for files opened for read (useful for video players)
<ul>
<li>with <code>--no-seek</code> flag to disable</li>
</ul></li>
<li>Fix crash on 32 bit ARM (alignment of 64 bit counter)</li>
<li>...and many more internal fixes and improvements!</li>
<li>Crypt</li>
<li>Don't show encrypted password in configurator to stop confusion</li>
<li>Amazon Drive</li>
<li>New wait for upload option <code>--acd-upload-wait-per-gb</code>
<ul>
<li>upload timeouts scale by file size and can be disabled</li>
</ul></li>
<li>Add 502 Bad Gateway to list of errors we retry</li>
<li>Fix overwriting a file with a zero length file</li>
<li>Fix ACD file size warning limit - thanks Felix Bünemann</li>
<li>Local</li>
<li>Unix: implement <code>-x</code>/<code>--one-file-system</code> to stay on a single file system
<ul>
<li>thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana</li>
</ul></li>
<li>Windows: ignore the symlink bit on files</li>
<li>Windows: Ignore directory based junction points</li>
<li>B2</li>
<li>Make sure each upload has at least one upload slot - fixes strange upload stats</li>
<li>Fix uploads when using crypt</li>
<li>Fix download of large files (sha1 mismatch)</li>
<li>Return error when we try to create a bucket which someone else owns</li>
<li>Update B2 docs with Data usage, and Crypt section - thanks Tomasz Mazur</li>
<li>S3</li>
<li>Command line and config file support for
<ul>
<li>Setting/overriding ACL - thanks Radek Senfeld</li>
<li>Setting storage class - thanks Asko Tamm</li>
</ul></li>
<li>Drive</li>
<li>Make exponential backoff work exactly as per Google specification</li>
<li>add <code>.epub</code>, <code>.odp</code> and <code>.tsv</code> as export formats.</li>
<li>Swift</li>
<li>Don't read metadata for directory marker objects</li>
</ul></li>
<li>v1.33 - 2016-08-24 <li>v1.33 - 2016-08-24
<ul> <ul>
<li>New Features</li> <li>New Features</li>
@ -3150,19 +3519,79 @@ h='&#108;&#x6f;&#x67;&#x69;&#x63;&#46;&#x63;&#122;';a='&#64;';n='&#114;&#x75;&#x
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>'); document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// --> // -->
</script><noscript>&#114;&#x75;&#x73;&#104;&#32;&#x61;&#116;&#32;&#108;&#x6f;&#x67;&#x69;&#x63;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#122;</noscript></li> </script><noscript>&#114;&#x75;&#x73;&#104;&#32;&#x61;&#116;&#32;&#108;&#x6f;&#x67;&#x69;&#x63;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#122;</noscript></li>
<li>Fredrik Fornwall <script type="text/javascript">
<!--
h='&#102;&#x6f;&#114;&#110;&#x77;&#x61;&#108;&#108;&#46;&#110;&#x65;&#116;';a='&#64;';n='&#102;&#114;&#x65;&#100;&#114;&#x69;&#x6b;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#102;&#114;&#x65;&#100;&#114;&#x69;&#x6b;&#32;&#x61;&#116;&#32;&#102;&#x6f;&#114;&#110;&#x77;&#x61;&#108;&#108;&#32;&#100;&#x6f;&#116;&#32;&#110;&#x65;&#116;</noscript></li>
<li>Asko Tamm <script type="text/javascript">
<!--
h='&#100;&#x65;&#x65;&#x6b;&#x69;&#116;&#46;&#110;&#x65;&#116;';a='&#64;';n='&#x61;&#x73;&#x6b;&#x6f;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#x61;&#x73;&#x6b;&#x6f;&#32;&#x61;&#116;&#32;&#100;&#x65;&#x65;&#x6b;&#x69;&#116;&#32;&#100;&#x6f;&#116;&#32;&#110;&#x65;&#116;</noscript></li>
<li>xor-zz <script type="text/javascript">
<!--
h='&#x67;&#x73;&#116;&#x6f;&#x63;&#x63;&#x6f;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#120;&#x6f;&#114;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#120;&#x6f;&#114;&#32;&#x61;&#116;&#32;&#x67;&#x73;&#116;&#x6f;&#x63;&#x63;&#x6f;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li>
<li>Tomasz Mazur <script type="text/javascript">
<!--
h='&#x67;&#x6d;&#x61;&#x69;&#108;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#116;&#x6d;&#x61;&#122;&#x75;&#114;&#x39;&#48;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#116;&#x6d;&#x61;&#122;&#x75;&#114;&#x39;&#48;&#32;&#x61;&#116;&#32;&#x67;&#x6d;&#x61;&#x69;&#108;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li>
<li>Marco Paganini <script type="text/javascript">
<!--
h='&#112;&#x61;&#x67;&#x61;&#110;&#x69;&#110;&#x69;&#46;&#110;&#x65;&#116;';a='&#64;';n='&#112;&#x61;&#x67;&#x61;&#110;&#x69;&#110;&#x69;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#112;&#x61;&#x67;&#x61;&#110;&#x69;&#110;&#x69;&#32;&#x61;&#116;&#32;&#112;&#x61;&#x67;&#x61;&#110;&#x69;&#110;&#x69;&#32;&#100;&#x6f;&#116;&#32;&#110;&#x65;&#116;</noscript></li>
<li>Felix Bünemann <script type="text/javascript">
<!--
h='&#108;&#x6f;&#x75;&#x69;&#x73;&#46;&#x69;&#110;&#102;&#x6f;';a='&#64;';n='&#98;&#x75;&#x65;&#110;&#x65;&#x6d;&#x61;&#110;&#110;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#98;&#x75;&#x65;&#110;&#x65;&#x6d;&#x61;&#110;&#110;&#32;&#x61;&#116;&#32;&#108;&#x6f;&#x75;&#x69;&#x73;&#32;&#100;&#x6f;&#116;&#32;&#x69;&#110;&#102;&#x6f;</noscript></li>
<li>Durval Menezes <script type="text/javascript">
<!--
h='&#100;&#x75;&#114;&#118;&#x61;&#108;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#106;&#x6d;&#114;&#x63;&#108;&#x6f;&#110;&#x65;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#106;&#x6d;&#114;&#x63;&#108;&#x6f;&#110;&#x65;&#32;&#x61;&#116;&#32;&#100;&#x75;&#114;&#118;&#x61;&#108;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li>
<li>Luiz Carlos Rumbelsperger Viana <script type="text/javascript">
<!--
h='&#104;&#x6f;&#116;&#x6d;&#x61;&#x69;&#108;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#x6d;&#x61;&#120;&#100;&#x31;&#x33;&#x5f;&#108;&#x75;&#x69;&#122;&#x5f;&#x63;&#x61;&#114;&#108;&#x6f;&#x73;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#x6d;&#x61;&#120;&#100;&#x31;&#x33;&#x5f;&#108;&#x75;&#x69;&#122;&#x5f;&#x63;&#x61;&#114;&#108;&#x6f;&#x73;&#32;&#x61;&#116;&#32;&#104;&#x6f;&#116;&#x6d;&#x61;&#x69;&#108;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li>
</ul> </ul>
<h2 id="contact-the-rclone-project">Contact the rclone project</h2> <h1 id="contact-the-rclone-project">Contact the rclone project</h1>
<h2 id="forum">Forum</h2>
<p>Forum for general discussions and questions:</p>
<ul>
<li>https://forum.rclone.org</li>
</ul>
<h2 id="gitub-project">GitHub project</h2>
<p>The project website is at:</p> <p>The project website is at:</p>
<ul> <ul>
<li>https://github.com/ncw/rclone</li> <li>https://github.com/ncw/rclone</li>
</ul> </ul>
<p>There you can file bug reports, ask for help or contribute pull requests.</p> <p>There you can file bug reports, ask for help or contribute pull requests.</p>
<p>See also</p> <h2 id="google">Google+</h2>
<p>Rclone has a Google+ page to which announcements are posted.</p>
<ul> <ul>
<li><a href="https://google.com/+RcloneOrg" rel="publisher">Google+ page for general comments</a> <li><a href="https://google.com/+RcloneOrg" rel="publisher">Google+ page for general comments</a></li>
</li></li>
</ul> </ul>
<p>Or email <script type="text/javascript"> <h2 id="twitter">Twitter</h2>
<p>You can also follow me on Twitter for rclone announcements.</p>
<ul>
<li><a href="https://twitter.com/njcw">@njcw</a></li>
</ul>
<h2 id="email">Email</h2>
<p>If all else fails, or you want to ask something private or confidential, email <script type="text/javascript">
<!-- <!--
h='&#x63;&#114;&#x61;&#x69;&#x67;&#x2d;&#x77;&#x6f;&#x6f;&#100;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#110;&#x69;&#x63;&#x6b;';e=n+a+h; h='&#x63;&#114;&#x61;&#x69;&#x67;&#x2d;&#x77;&#x6f;&#x6f;&#100;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#110;&#x69;&#x63;&#x6b;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+'&#78;&#x69;&#x63;&#x6b;&#32;&#x43;&#114;&#x61;&#x69;&#x67;&#x2d;&#x57;&#x6f;&#x6f;&#100;'+'<\/'+'a'+'>'); document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+'&#78;&#x69;&#x63;&#x6b;&#32;&#x43;&#114;&#x61;&#x69;&#x67;&#x2d;&#x57;&#x6f;&#x6f;&#100;'+'<\/'+'a'+'>');

MANUAL.md

@ -1,6 +1,6 @@
% rclone(1) User Manual % rclone(1) User Manual
% Nick Craig-Wood % Nick Craig-Wood
% Aug 24, 2016 % Nov 06, 2016
Rclone Rclone
====== ======
@ -40,16 +40,73 @@ Links
* <a href="https://google.com/+RcloneOrg" rel="publisher">Google+ page</a></li> * <a href="https://google.com/+RcloneOrg" rel="publisher">Google+ page</a></li>
* [Downloads](http://rclone.org/downloads/) * [Downloads](http://rclone.org/downloads/)
Install # Install #
-------
Rclone is a Go program and comes as a single binary file. Rclone is a Go program and comes as a single binary file.
[Download](http://rclone.org/downloads/) the relevant binary. ## Quickstart ##
Or alternatively if you have Go 1.5+ installed use * [Download](http://rclone.org/downloads/) the relevant binary.
* Unpack the `rclone` binary.
* Run `rclone config` to setup. See [rclone config docs](http://rclone.org/docs/) for more details.
go get github.com/ncw/rclone See below for some expanded Linux / macOS instructions.
See the [Usage section](http://rclone.org/docs/) of the docs for how to use rclone, or
run `rclone -h`.
## Linux installation from precompiled binary ##
Fetch and unpack
curl -O http://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip rclone-current-linux-amd64.zip
cd rclone-*-linux-amd64
Copy binary file
sudo cp rclone /usr/sbin/
sudo chown root:root /usr/sbin/rclone
sudo chmod 755 /usr/sbin/rclone
Install manpage
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb
Run `rclone config` to setup. See [rclone config docs](http://rclone.org/docs/) for more details.
rclone config
## macOS installation from precompiled binary ##
Download the latest version of rclone.
cd && curl -O http://downloads.rclone.org/rclone-current-osx-amd64.zip
Unzip the download and cd to the extracted folder.
unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64
Move rclone to your $PATH. You will be prompted for your password.
sudo mv rclone /usr/local/bin/
Remove the leftover files.
cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip
Run `rclone config` to setup. See [rclone config docs](http://rclone.org/docs/) for more details.
rclone config
## Install from source ##
Make sure you have at least [Go](https://golang.org/) 1.5 installed.
Make sure your `GOPATH` is set, then:
go get -u -v github.com/ncw/rclone
and this will build the binary in `$GOPATH/bin`. If you have built and this will build the binary in `$GOPATH/bin`. If you have built
rclone before then you will want to update its dependencies first with rclone before then you will want to update its dependencies first with
@ -57,25 +114,7 @@ this
go get -u -v github.com/ncw/rclone/... go get -u -v github.com/ncw/rclone/...
See the [Usage section](http://rclone.org/docs/) of the docs for how to use rclone, or ## Installation with Ansible ##
run `rclone -h`.
linux binary downloaded files install example
-------
unzip rclone-v1.17-linux-amd64.zip
cd rclone-v1.17-linux-amd64
#copy binary file
sudo cp rclone /usr/sbin/
sudo chown root:root /usr/sbin/rclone
sudo chmod 755 /usr/sbin/rclone
#install manpage
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb
Installation with Ansible
-------
This can be done with [Stefan Weichinger's ansible This can be done with [Stefan Weichinger's ansible
role](https://github.com/stefangweichinger/ansible-rclone). role](https://github.com/stefangweichinger/ansible-rclone).
@ -567,7 +606,7 @@ rclone dedupe [mode] remote:path
### Options ### Options
``` ```
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename. --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
``` ```
## rclone authorize ## rclone authorize
@ -657,6 +696,29 @@ rclone.org website.
rclone gendocs output_directory rclone gendocs output_directory
``` ```
## rclone listremotes
List all the remotes in the config file.
### Synopsis
rclone listremotes lists all the available remotes from the config file.
When used with the -l flag it lists the types too.
```
rclone listremotes
```
### Options
```
-l, --long Show the type as well as names.
```
## rclone mount ## rclone mount
Mount the remote as a mountpoint. **EXPERIMENTAL** Mount the remote as a mountpoint. **EXPERIMENTAL**
@ -686,10 +748,9 @@ Or with OS X
### Limitations ### ### Limitations ###
This can only read files sequentially, or write files sequentially. It This can only write files sequentially, it can only seek when reading.
can't read and write or seek in files.
rclonefs inherits rclone's directory handling. In rclone's world Rclone mount inherits rclone's directory handling. In rclone's world
directories don't really exist. This means that empty directories directories don't really exist. This means that empty directories
will have a tendency to disappear once they fall out of the directory will have a tendency to disappear once they fall out of the directory
cache. cache.
@ -713,8 +774,9 @@ mount won't do that, so will be less reliable than the rclone command.
### Bugs ### ### Bugs ###
* All the remotes should work for read, but some may not for write * All the remotes should work for read, but some may not for write
* those which need to know the size in advance won't - eg B2 * those which need to know the size in advance won't - eg B2, Amazon Drive
* maybe should pass in size as -1 to mean work it out * maybe should pass in size as -1 to mean work it out
* Or put in an upload cache to cache the files on disk first
### TODO ### ### TODO ###
@ -730,8 +792,20 @@ rclone mount remote:path /path/to/mountpoint
### Options ### Options
``` ```
--debug-fuse Debug the FUSE internals - needs -v. --allow-non-empty Allow mounting over a non-empty directory.
--no-modtime Don't read the modification time (can speed things up). --allow-other Allow access to other users.
--allow-root Allow access to root user.
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--gid uint32 Override the gid field set by the filesystem. (default 502)
--max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
--no-modtime Don't read the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
``` ```
@ -854,11 +928,11 @@ For example to limit bandwidth usage to 10 MBytes/s use `--bwlimit 10M`
This only limits the bandwidth of the data transfer, it doesn't limit This only limits the bandwidth of the data transfer, it doesn't limit
the bandwidth of the directory listings etc. the bandwidth of the directory listings etc.
For Linux/Unix operating systems: rclone will toggle the bandwidth limiter on Note that the units are Bytes/s not Bits/s. Typically connections are
and off upon receipt of the SIGUSR2 signal. This feature allows the user to measured in Bits/s - to convert divide by 8. For example let's say
remove the bandwidth limitations of a long running rclone transfer during you have a 10 Mbit/s connection and you wish rclone to use half of it
off-peak hours, and to restore it back to the value specified with --bwlimit - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a `--bwlimit
again when needed. 0.625M` parameter for rclone.
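The Mbit/s to MByte/s conversion above is easy to script; here is a small sketch (the 5 Mbit/s figure is just the worked example's value, substitute your own link speed):

```
# Convert a rate in Mbit/s to the MByte/s value --bwlimit expects.
mbit=5
awk -v mbit="$mbit" 'BEGIN { printf "--bwlimit %gM\n", mbit / 8 }'
```

For a 5 Mbit/s link this prints `--bwlimit 0.625M`.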
### --checkers=N ### ### --checkers=N ###
@ -1199,6 +1273,13 @@ here which are used for testing. These start with remote name eg
Write CPU profile to file. This can be analysed with `go tool pprof`. Write CPU profile to file. This can be analysed with `go tool pprof`.
### --dump-auth ###
Dump HTTP headers - will contain sensitive info such as
`Authorization:` headers - use `--dump-headers` to dump without
`Authorization:` headers. Can be very verbose. Useful for debugging
only.
### --dump-bodies ### ### --dump-bodies ###
Dump HTTP headers and bodies - may contain sensitive info. Can be Dump HTTP headers and bodies - may contain sensitive info. Can be
@ -1211,8 +1292,11 @@ and exclude options are filtering on.
### --dump-headers ### ### --dump-headers ###
Dump HTTP headers - may contain sensitive info. Can be very verbose. Dump HTTP headers with `Authorization:` lines removed. May still
Useful for debugging only. contain sensitive info. Can be very verbose. Useful for debugging
only.
Use `--dump-auth` if you do want the `Authorization:` headers.
### --memprofile=FILE ### ### --memprofile=FILE ###
@ -1291,9 +1375,21 @@ If you use the `--log-file=FILE` option, rclone will redirect `Error`,
Exit Code Exit Code
--------- ---------
If any errors occurred during the command, rclone will set a non zero If any errors occurred during the command, rclone will exit with an exit code of
exit code. This allows scripts to detect when rclone operations have `1`. This allows scripts to detect when rclone operations have failed.
failed.
During the startup phase rclone will exit immediately if an error is
detected in the configuration. There will always be a log message
immediately before exiting.
When rclone is running it will accumulate errors as it goes along, and
only exit with a non-zero exit code if (after retries) there were still
transfers with errors remaining. For every error counted there will
be a high priority log message (visible with `-q`) showing the
message and which file caused the problem. A high priority message is
also shown when starting a retry so the user can see that any previous
error messages may not be valid after the retry. If rclone has done a
retry it will log a high priority message if the retry was successful.
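A wrapper script can act on this exit code directly. The sketch below uses `false` as a stand-in for an rclone invocation that failed after retries; the `do_sync` function name is illustrative, not an rclone feature:

```
#!/bin/sh
# Stand-in for a transfer that fails, e.g. rclone sync /src remote:dst
do_sync() { false; }

if do_sync; then
    echo "sync succeeded"
else
    # $? still holds the exit code of do_sync at this point
    echo "sync failed with exit code $?"
fi
```

Run as-is this prints `sync failed with exit code 1`.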
# Configuring rclone on a remote / headless machine # # Configuring rclone on a remote / headless machine #
@ -1471,13 +1567,13 @@ Rclone keeps track of directories that could match any file patterns.
Eg if you add the include rule Eg if you add the include rule
\a\*.jpg /a/*.jpg
Rclone will synthesize the directory include rule Rclone will synthesize the directory include rule
\a\ /a/
If you put any rules which end in `\` then it will only match If you put any rules which end in `/` then it will only match
directories. directories.
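As an illustration, a hypothetical filter file (passed with `--filter-from`) relying on this behaviour might be:

```
+ /a/*.jpg
- *
```

Only files are named here, but the synthesized `/a/` directory rule means rclone will still descend into `/a/` to look for matches.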
Directory matches are **only** used to optimise directory access Directory matches are **only** used to optimise directory access
@ -1761,19 +1857,19 @@ show through.
Here is an overview of the major features of each cloud storage system. Here is an overview of the major features of each cloud storage system.
| Name | Hash | ModTime | Case Insensitive | Duplicate Files | | Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type |
| ---------------------- |:-------:|:-------:|:----------------:|:---------------:| | ---------------------- |:-------:|:-------:|:----------------:|:---------------:|:---------:|
| Google Drive | MD5 | Yes | No | Yes | | Google Drive | MD5 | Yes | No | Yes | R/W |
| Amazon S3 | MD5 | Yes | No | No | | Amazon S3 | MD5 | Yes | No | No | R/W |
| Openstack Swift | MD5 | Yes | No | No | | Openstack Swift | MD5 | Yes | No | No | R/W |
| Dropbox | - | No | Yes | No | | Dropbox | - | No | Yes | No | R |
| Google Cloud Storage | MD5 | Yes | No | No | | Google Cloud Storage | MD5 | Yes | No | No | R/W |
| Amazon Drive | MD5 | No | Yes | No | | Amazon Drive | MD5 | No | Yes | No | R |
| Microsoft One Drive | SHA1 | Yes | Yes | No | | Microsoft One Drive | SHA1 | Yes | Yes | No | R |
| Hubic | MD5 | Yes | No | No | | Hubic | MD5 | Yes | No | No | R/W |
| Backblaze B2 | SHA1 | Yes | No | No | | Backblaze B2 | SHA1 | Yes | No | No | R/W |
| Yandex Disk | MD5 | Yes | No | No | | Yandex Disk | MD5 | Yes | No | No | R/W |
| The local filesystem | All | Yes | Depends | No | | The local filesystem | All | Yes | Depends | No | - |
### Hash ### ### Hash ###
@ -1824,6 +1920,86 @@ objects with the same name.
This confuses rclone greatly when syncing - use the `rclone dedupe` This confuses rclone greatly when syncing - use the `rclone dedupe`
command to rename or remove duplicates. command to rename or remove duplicates.
### MIME Type ###
MIME types (also known as media types) classify types of documents
using a simple text classification, eg `text/html` or
`application/pdf`.
Some cloud storage systems support reading (`R`) the MIME type of
objects and some support writing (`W`) the MIME type of objects.
The MIME type can be important if you are serving files directly to
HTTP from the storage system.
If you are copying from a remote which supports reading (`R`) to a
remote which supports writing (`W`) then rclone will preserve the MIME
types. Otherwise they will be guessed from the extension, or the
remote itself may assign the MIME type.
## Optional Features ##
All the remotes support a basic set of features, but there are some
optional features supported by some remotes used to make some
operations more efficient.
| Name | Purge | Copy | Move | DirMove | CleanUp |
| ---------------------- |:-----:|:----:|:----:|:-------:|:-------:|
| Google Drive | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) |
| Amazon S3 | No | Yes | No | No | No |
| Openstack Swift | Yes † | Yes | No | No | No |
| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) |
| Google Cloud Storage | Yes | Yes | No | No | No |
| Amazon Drive | Yes | No | No [#721](https://github.com/ncw/rclone/issues/721) | No [#721](https://github.com/ncw/rclone/issues/721) | No [#575](https://github.com/ncw/rclone/issues/575) |
| Microsoft One Drive | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) |
| Hubic | Yes † | Yes | No | No | No |
| Backblaze B2 | No | No | No | No | Yes |
| Yandex Disk | Yes | No | No | No | No [#575](https://github.com/ncw/rclone/issues/575) |
| The local filesystem | Yes | No | Yes | Yes | No |
### Purge ###
This deletes a directory quicker than just deleting all the files in
the directory.
† Note Swift and Hubic implement this in order to delete directory
markers but they don't actually have a quicker way of deleting files
other than deleting them individually.
### Copy ###
Used when copying an object to and from the same remote. This is known
as a server side copy so you can copy a file without downloading it
and uploading it again. It is used if you use `rclone copy` or
`rclone move` if the remote doesn't support `Move` directly.
If the server doesn't support `Copy` directly then for copy operations
the file is downloaded then re-uploaded.
### Move ###
Used when moving/renaming an object on the same remote. This is known
as a server side move of a file. This is used in `rclone move` if the
server doesn't support `DirMove`.
If the server isn't capable of `Move` then rclone simulates it with
`Copy` then delete. If the server doesn't support `Copy` then rclone
will download the file and re-upload it.
### DirMove ###
This is used to implement `rclone move` to move a directory if
possible. If it isn't then it will use `Move` on each file (which
falls back to `Copy` then download and upload - see `Move` section).
### CleanUp ###
This is used for emptying the trash for a remote by `rclone cleanup`.
If the server can't do `CleanUp` then `rclone cleanup` will return an
error.
Google Drive Google Drive
----------------------------------------- -----------------------------------------
@ -2003,8 +2179,10 @@ Here are the possible extensions with their corresponding mime types.
| csv | text/csv | Standard CSV format for Spreadsheets | | csv | text/csv | Standard CSV format for Spreadsheets |
| doc | application/msword | Micosoft Office Document | | doc | application/msword | Micosoft Office Document |
| docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document | | docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document |
| epub | application/epub+zip | E-book format |
| html | text/html | An HTML Document | | html | text/html | An HTML Document |
| jpg | image/jpeg | A JPEG Image File | | jpg | image/jpeg | A JPEG Image File |
| odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation |
| ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet | | ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
| ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet | | ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
| odt | application/vnd.oasis.opendocument.text | Openoffice Document | | odt | application/vnd.oasis.opendocument.text | Openoffice Document |
@ -2013,6 +2191,7 @@ Here are the possible extensions with their corresponding mime types.
| pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint | | pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint |
| rtf | application/rtf | Rich Text Format | | rtf | application/rtf | Rich Text Format |
| svg | image/svg+xml | Scalable Vector Graphics Format | | svg | image/svg+xml | Scalable Vector Graphics Format |
| tsv | text/tab-separated-values | Standard TSV format for spreadsheets |
| txt | text/plain | Plain Text | | txt | text/plain | Plain Text |
| xls | application/vnd.ms-excel | Microsoft Office Spreadsheet | | xls | application/vnd.ms-excel | Microsoft Office Spreadsheet |
| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet | | xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet |
@ -2206,6 +2385,17 @@ Choose a number from below, or type in your own value
2 / AES256 2 / AES256
\ "AES256" \ "AES256"
server_side_encryption> server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Reduced redundancy storage class
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
storage_class>
Remote config Remote config
-------------------- --------------------
[remote] [remote]
@ -2276,6 +2466,27 @@ credentials. In order of precedence:
If none of these option actually end up providing `rclone` with AWS If none of these option actually end up providing `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see below). credentials then S3 interaction will be non-authenticated (see below).
### Specific options ###
Here are the command line options specific to this cloud storage
system.
#### --s3-acl=STRING ####
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit the [canned ACL docs](http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl).
#### --s3-storage-class=STRING ####
Storage class to upload new objects with.
Available options include:
- STANDARD - default storage class
- STANDARD_IA - for less frequently accessed data (e.g. backups)
- REDUCED_REDUNDANCY (only for noncritical, reproducible data, has lower redundancy)
### Anonymous access to public buckets ### ### Anonymous access to public buckets ###
If you want to use rclone to access a public bucket, configure with a If you want to use rclone to access a public bucket, configure with a
@ -2532,6 +2743,38 @@ excess files in the container.
rclone sync /home/local/directory remote:container rclone sync /home/local/directory remote:container
### Configuration from an Openstack credentials file ###
An Openstack credentials file typically looks something
like this (without the comments)
```
export OS_AUTH_URL=https://a.provider.net/v2.0
export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
export OS_TENANT_NAME="1234567890123456"
export OS_USERNAME="123abc567xy"
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_REGION_NAME="SBG1"
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
```
The config file needs to look something like this where `$OS_USERNAME`
represents the value of the `OS_USERNAME` variable - `123abc567xy` in
the example above.
```
[remote]
type = swift
user = $OS_USERNAME
key = $OS_PASSWORD
auth = $OS_AUTH_URL
tenant = $OS_TENANT_NAME
```
Note that you may (or may not) need to set `region` too - try without first.
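The substitution can also be done mechanically. This sketch renders the config section from shell variables; the values are the placeholders from the example above, and the password is invented:

```
#!/bin/sh
# Placeholder OpenStack credentials (normally already exported by the rc file).
OS_USERNAME="123abc567xy"
OS_PASSWORD="secret"
OS_AUTH_URL="https://a.provider.net/v2.0"
OS_TENANT_NAME="1234567890123456"

# Emit an rclone config section with the values substituted in.
cat <<EOF
[remote]
type = swift
user = $OS_USERNAME
key = $OS_PASSWORD
auth = $OS_AUTH_URL
tenant = $OS_TENANT_NAME
EOF
```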
### Specific options ### ### Specific options ###
Here are the command line options specific to this cloud storage Here are the command line options specific to this cloud storage
@ -2568,6 +2811,9 @@ authentication fails for Swift.
So this most likely means your username / password is wrong. You can So this most likely means your username / password is wrong. You can
investigate further with the `--dump-bodies` flag. investigate further with the `--dump-bodies` flag.
This may also be caused by specifying the region when you shouldn't
have (eg OVH).
#### Rclone gives Failed to create file system: Response didn't have storage storage url and auth token #### #### Rclone gives Failed to create file system: Response didn't have storage storage url and auth token ####
This is most likely caused by forgetting to specify your tenant when This is most likely caused by forgetting to specify your tenant when
@ -2971,6 +3217,12 @@ don't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Amazon's apps or via trash, so you will have to do that with one of Amazon's apps or via
the Amazon Drive website. the Amazon Drive website.
### Using with non `.com` Amazon accounts ###
Let's say you usually use `amazon.co.uk`. When you authenticate with
rclone it will take you to an `amazon.com` page to log in. Your
`amazon.co.uk` email and password should work here just fine.
### Specific options ### ### Specific options ###
Here are the command line options specific to this cloud storage Here are the command line options specific to this cloud storage
@ -2987,13 +3239,27 @@ To download files above this threshold, rclone requests a `tempLink`
which downloads the file through a temporary URL directly from the which downloads the file through a temporary URL directly from the
underlying S3 storage. underlying S3 storage.
#### --acd-upload-wait-time=TIME #### #### --acd-upload-wait-per-gb=TIME ####
Sometimes Amazon Drive gives an error when a file has been fully Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This uploaded but the file appears anyway after a little while. This
controls the time rclone waits - 2 minutes by default. You might want happens sometimes for files over 1GB in size and nearly every time for
to increase the time if you are having problems with very big files. files bigger than 10GB. This parameter controls the time rclone waits
Upload with the `-v` flag for more info. for the file to appear.
The default value for this parameter is 3 minutes per GB, so by
default it will wait 3 minutes for every GB uploaded to see if the
file appears.
You can disable this feature by setting it to 0. This may cause
conflict errors as rclone retries the failed upload but the file will
most likely appear correctly eventually.
These values were determined empirically by observing lots of uploads
of big files for a range of file sizes.
Upload with the `-v` flag to see more info about what rclone is doing
in this situation.
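For example, to give a very large upload extra time to appear (the remote name and timing here are illustrative, assuming the flag accepts a per-GB duration):

```
rclone copy -v --acd-upload-wait-per-gb 5m /data/big.iso acd:backup
```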
### Limitations ###
@ -3014,7 +3280,7 @@ This means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as any other
failure. To avoid this problem, use the `--max-size 50G` option to limit
the maximum size of uploaded files.
Microsoft One Drive
@ -3463,6 +3729,53 @@ $ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
```
### Data usage ###
It is useful to know how many requests are sent to the server in different scenarios.
All copy commands send the following 4 requests:
```
/b2api/v1/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names
```
The `b2_list_file_names` request will be sent once for every 1k files
in the remote path, providing the checksum and modification time of
the listed files. As of version 1.33 issue
[#818](https://github.com/ncw/rclone/issues/818) causes extra requests
to be sent when using B2 with Crypt. When a copy operation does not
require any files to be uploaded, no more requests will be sent.
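Putting the numbers above together, the request count for a copy that uploads nothing can be estimated from the file count alone — a rough sketch of the arithmetic (the file count is illustrative):

```
# 3 fixed requests (b2_authorize_account, b2_create_bucket, b2_list_buckets)
# plus one b2_list_file_names request per 1000 files, rounded up
awk -v n=2500 'BEGIN { print 3 + int((n + 999) / 1000) }'
```

With 2500 files this prints `6` — three fixed requests plus three listing requests.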
Uploading files that do not require chunking, will send 2 requests per
file upload:
```
/b2api/v1/b2_get_upload_url
/b2api/v1/b2_upload_file/
```
Uploading files requiring chunking, will send 2 requests (one each to
start and finish the upload) and another 2 requests for each chunk:
```
/b2api/v1/b2_start_large_file
/b2api/v1/b2_get_upload_part_url
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file
```
### B2 with crypt ###
When using B2 with `crypt` files are encrypted into a temporary
location and streamed from there. This is required to calculate the
encrypted file's checksum before beginning the upload. On Windows the
%TMPDIR% environment variable is used as the temporary location. If
the file requires chunking, both the chunking and encryption will take
place in memory.
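If your default temporary directory is too small for the encrypted copies, on unix-like systems you can point rclone somewhere else via the standard `TMPDIR` environment variable — a sketch, with path and remote name illustrative:

```
TMPDIR=/mnt/scratch rclone copy /data secretb2:backup
```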
### Specific options ###
Here are the command line options specific to this cloud storage
@ -3692,6 +4005,8 @@ Choose a number from below, or type in your own value
   \ "yandex"
Storage> 5
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or "myremote:"
remote> remote:path
How to encrypt the filenames.
Choose a number from below, or type in your own value
@ -3729,8 +4044,8 @@ Remote config
[secret]
remote = remote:path
filename_encryption = standard
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
@ -3751,6 +4066,27 @@ Note that rclone does not encrypt
* file length - this can be calculated within 16 bytes
* modification time - used for syncing
## Specifying the remote ##
In normal use, make sure the remote has a `:` in. If you specify the
remote without a `:` then rclone will use a local directory of that
name. So if you use a remote of `/path/to/secret/files` then rclone
will encrypt stuff to that directory. If you use a remote of `name`
then rclone will put files in a directory called `name` in the current
directory.
If you specify the remote as `remote:path/to/dir` then rclone will
store encrypted files in `path/to/dir` on the remote. If you are using
file name encryption, then when you save files to
`secret:subdir/subfile` this will store them in the unencrypted path
`path/to/dir` but the `subdir/subpath` bit will be encrypted.
Note that unless you want encrypted bucket names (which are difficult
to manage because you won't know what directory they represent in web
interfaces etc), you should probably specify a bucket, eg
`remote:secretbucket` when using bucket based remotes such as S3,
Swift, Hubic, B2, GCS.
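As a sketch of the bucket based case (the remote and bucket names are illustrative), the crypt remote's `remote` line would point at a bucket rather than the remote root:

```
[secret]
type = crypt
remote = s3:secretbucket
filename_encryption = standard
```

Files copied to `secret:docs` would then land encrypted inside `secretbucket`, with the bucket name itself left unencrypted.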
## Example ##
To test I made a little directory of files using "standard" file name
@ -3837,6 +4173,14 @@ characters in length then you should be OK on all providers.
There may be an even more secure file name encryption mode in the
future which will address the long file name problem.
### Modified time and hashes ###
Crypt stores modification times using the underlying remote so support
depends on that.
Hashes are not stored for crypt. However the data integrity is
protected by an extremely strong crypto authenticator.
## File formats ##
### File encryption ###
@ -4008,9 +4352,110 @@ This will use UNC paths on `c:\src` but not on `z:\dst`.
Of course this will cause problems if the absolute path length of a
file exceeds 258 characters on z, so only use this option if you have to.
### Specific options ###
Here are the command line options specific to local storage
#### --one-file-system, -x ####
This tells rclone to stay in the filesystem specified by the root and
not to recurse into different file systems.
For example, if you have a directory hierarchy like this
```
root
├── disk1 - disk1 mounted on the root
│   └── file3 - stored on disk1
├── disk2 - disk2 mounted on the root
│   └── file4 - stored on disk2
├── file1 - stored on the root disk
└── file2 - stored on the root disk
```
Using `rclone --one-file-system copy root remote:` will only copy `file1` and `file2`. Eg
```
$ rclone -q --one-file-system ls root
0 file1
0 file2
```
```
$ rclone -q ls root
0 disk1/file3
0 disk2/file4
0 file1
0 file2
```
**NB** Rclone (like most unix tools such as `du`, `rsync` and `tar`)
treats a bind mount to the same device as being on the same
filesystem.
**NB** This flag is only available on Unix based systems. On systems
where it isn't supported (eg Windows) it will not appear as a valid
flag.
Changelog
---------
* v1.34 - 2016-11-06
* New Features
* Stop single file and `--files-from` operations iterating through the source bucket.
* Stop removing failed upload to cloud storage remotes
* Make ContentType be preserved for cloud to cloud copies
* Add support to toggle bandwidth limits via SIGUSR2 - thanks Marco Paganini
* `rclone check` shows count of hashes that couldn't be checked
* `rclone listremotes` command
* Support linux/arm64 build - thanks Fredrik Fornwall
* Remove `Authorization:` lines from `--dump-headers` output
* Bug Fixes
* Ignore files with control characters in the names
* Fix `rclone move` command
* Delete src files which already existed in dst
* Fix deletion of src file when dst file older
* Fix `rclone check` on crypted file systems
* Make failed uploads not count as "Transferred"
* Make sure high level retries show with `-q`
* Use a vendor directory with godep for repeatable builds
* `rclone mount` - FUSE
* Implement FUSE mount options
* `--no-modtime`, `--debug-fuse`, `--read-only`, `--allow-non-empty`, `--allow-root`, `--allow-other`
* `--default-permissions`, `--write-back-cache`, `--max-read-ahead`, `--umask`, `--uid`, `--gid`
* Add `--dir-cache-time` to control caching of directory entries
* Implement seek for files opened for read (useful for video players)
* with `-no-seek` flag to disable
* Fix crash on 32 bit ARM (alignment of 64 bit counter)
* ...and many more internal fixes and improvements!
* Crypt
* Don't show encrypted password in configurator to stop confusion
* Amazon Drive
* New wait for upload option `--acd-upload-wait-per-gb`
* upload timeouts scale by file size and can be disabled
* Add 502 Bad Gateway to list of errors we retry
* Fix overwriting a file with a zero length file
* Fix ACD file size warning limit - thanks Felix Bünemann
* Local
* Unix: implement `-x`/`--one-file-system` to stay on a single file system
* thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana
* Windows: ignore the symlink bit on files
* Windows: Ignore directory based junction points
* B2
* Make sure each upload has at least one upload slot - fixes strange upload stats
* Fix uploads when using crypt
* Fix download of large files (sha1 mismatch)
* Return error when we try to create a bucket which someone else owns
* Update B2 docs with Data usage, and Crypt section - thanks Tomasz Mazur
* S3
* Command line and config file support for
* Setting/overriding ACL - thanks Radek Senfeld
* Setting storage class - thanks Asko Tamm
* Drive
* Make exponential backoff work exactly as per Google specification
* add `.epub`, `.odp` and `.tsv` as export formats.
* Swift
* Don't read metadata for directory marker objects
* v1.33 - 2016-08-24
* New Features
* Implement encryption
@ -4665,9 +5110,24 @@ Contributors
* Stefan G. Weichinger <office@oops.co.at>
* Per Cederberg <cederberg@gmail.com>
* Radek Šenfeld <rush@logic.cz>
* Fredrik Fornwall <fredrik@fornwall.net>
* Asko Tamm <asko@deekit.net>
* xor-zz <xor@gstocco.com>
* Tomasz Mazur <tmazur90@gmail.com>
* Marco Paganini <paganini@paganini.net>
* Felix Bünemann <buenemann@louis.info>
* Durval Menezes <jmrclone@durval.com>
* Luiz Carlos Rumbelsperger Viana <maxd13_luiz_carlos@hotmail.com>
# Contact the rclone project #
## Forum ##
Forum for general discussions and questions:
* https://forum.rclone.org
## Gitub project ##
The project website is at:
@ -4676,9 +5136,20 @@ The project website is at:
There you can file bug reports, ask for help or contribute pull
requests.
## Google+ ##
Rclone has a Google+ page to which announcements are posted
* <a href="https://google.com/+RcloneOrg" rel="publisher">Google+ page for general comments</a>
## Twitter ##
You can also follow me on Twitter for rclone announcements
* [@njcw](https://twitter.com/njcw)
## Email ##
Or if all else fails, or you want to ask something private or
confidential, email [Nick Craig-Wood](mailto:nick@craig-wood.com)
@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Nov 06, 2016
@ -44,38 +44,85 @@ Links
- Downloads
INSTALL
Rclone is a Go program and comes as a single binary file.
Quickstart
- Download the relevant binary.
- Unpack to get the rclone binary.
- Run rclone config to setup. See rclone config docs for more details.
See below for some expanded Linux / macOS instructions.
See the Usage section of the docs for how to use rclone, or run
rclone -h.
Linux installation from precompiled binary
Fetch and unpack
curl -O http://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip rclone-current-linux-amd64.zip
cd rclone-*-linux-amd64
Copy binary file
sudo cp rclone /usr/sbin/
sudo chown root:root /usr/sbin/rclone
sudo chmod 755 /usr/sbin/rclone
Install manpage
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb
Run rclone config to setup. See rclone config docs for more details.
rclone config
macOS installation from precompiled binary
Download the latest version of rclone.
cd && curl -O http://downloads.rclone.org/rclone-current-osx-amd64.zip
Unzip the download and cd to the extracted folder.
unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64
Move rclone to your $PATH. You will be prompted for your password.
sudo mv rclone /usr/local/bin/
Remove the leftover files.
cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip
Run rclone config to setup. See rclone config docs for more details.
rclone config
Install from source
Make sure you have at least Go 1.5 installed. Make sure your GOPATH is
set, then:
go get -u -v github.com/ncw/rclone
and this will build the binary in $GOPATH/bin. If you have built rclone
before then you will want to update its dependencies first with this
go get -u -v github.com/ncw/rclone/...
Installation with Ansible
@ -514,7 +561,7 @@ Or
Options
--dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
rclone authorize
@ -588,6 +635,23 @@ rclone.org website.
rclone gendocs output_directory
rclone listremotes
List all the remotes in the config file.
Synopsis
rclone listremotes lists all the available remotes from the config file.
When used with the -l flag it lists the types too.
rclone listremotes
Options
-l, --long Show the type as well as names.
rclone mount
Mount the remote as a mountpoint. EXPERIMENTAL
@ -616,10 +680,9 @@ Or with OS X
Limitations
This can only write files sequentially; it can only seek when reading.
Rclone mount inherits rclone's directory handling. In rclone's world
directories don't really exist. This means that empty directories will
have a tendency to disappear once they fall out of the directory cache.
@ -642,8 +705,10 @@ that, so will be less reliable than the rclone command.
Bugs
- All the remotes should work for read, but some may not for write
  - those which need to know the size in advance won't - eg B2,
    Amazon Drive
  - maybe should pass in size as -1 to mean work it out
  - Or put in an upload cache to cache the files on disk first
TODO
@ -655,8 +720,20 @@ TODO
Options
--allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--gid uint32 Override the gid field set by the filesystem. (default 502)
--max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
--no-modtime Don't read the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
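For example (remote name and mountpoint illustrative), a read-only mount that other users on the machine can access:

    rclone mount --read-only --allow-other remote:path /path/to/mountpoint &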
Copying single files
@ -776,6 +853,12 @@ For example to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
This only limits the bandwidth of the data transfer, it doesn't limit
the bandwidth of the directory listings etc.
Note that the units are Bytes/s not Bits/s. Typically connections are
measured in Bits/s - to convert divide by 8. For example let's say you
have a 10 Mbit/s connection and you wish rclone to use half of it - 5
Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M
parameter for rclone.
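Following the arithmetic above (remote and paths illustrative), using half of a 10 Mbit/s connection looks like:

    rclone sync --bwlimit 0.625M /home/me/photos remote:photos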
--checkers=N
The number of checkers to run in parallel. Checkers do the equality
@ -1107,6 +1190,12 @@ which are used for testing. These start with remote name eg
Write CPU profile to file. This can be analysed with go tool pprof.
--dump-auth
Dump HTTP headers - will contain sensitive info such as Authorization:
headers - use --dump-headers to dump without Authorization: headers. Can
be very verbose. Useful for debugging only.
--dump-bodies
Dump HTTP headers and bodies - may contain sensitive info. Can be very
@ -1119,8 +1208,10 @@ exclude options are filtering on.
--dump-headers
Dump HTTP headers with Authorization: lines removed. May still contain
sensitive info. Can be very verbose. Useful for debugging only.
Use --dump-auth if you do want the Authorization: headers.
--memprofile=FILE
@ -1199,9 +1290,21 @@ and Debug messages along with standard error to FILE.
Exit Code
If any errors occurred during the command, rclone will exit with an
exit code of 1. This allows scripts to detect when rclone operations
have failed.
During the startup phase rclone will exit immediately if an error is
detected in the configuration. There will always be a log message
immediately before exiting.
When rclone is running it will accumulate errors as it goes along, and
only exit with a non-zero exit code if (after retries) there were still
transfers with errors remaining. For every error counted there will be a
high priority log message (visible with -q) showing the message and
which file caused the problem. A high priority message is also shown
when starting a retry so the user can see that any previous error
messages may not be valid after the retry. If rclone has done a retry it
will log a high priority message if the retry was successful.
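A sketch of how a script might use the exit code (paths and remote name illustrative):

    rclone sync /home/me/docs remote:backup
    if [ $? -ne 0 ]; then
        echo "rclone sync finished with errors - check the log" >&2
    fi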
@ -1379,13 +1482,13 @@ Rclone keeps track of directories that could match any file patterns.
Eg if you add the include rule
/a/*.jpg
Rclone will synthesize the directory include rule
/a/
If you put any rules which end in / then it will only match directories.
Directory matches are ONLY used to optimise directory access patterns -
you must still match the files that you want to match. Directory matches
@ -1671,19 +1774,19 @@ Features
Here is an overview of the major features of each cloud storage system.
Name                   Hash   ModTime   Case Insensitive   Duplicate Files   MIME Type
---------------------- ------ --------- ------------------ ----------------- -----------
Google Drive           MD5    Yes       No                 Yes               R/W
Amazon S3              MD5    Yes       No                 No                R/W
Openstack Swift        MD5    Yes       No                 No                R/W
Dropbox                -      No        Yes                No                R
Google Cloud Storage   MD5    Yes       No                 No                R/W
Amazon Drive           MD5    No        Yes                No                R
Microsoft One Drive    SHA1   Yes       Yes                No                R
Hubic                  MD5    Yes       No                 No                R/W
Backblaze B2           SHA1   Yes       No                 No                R/W
Yandex Disk            MD5    Yes       No                 No                R/W
The local filesystem   All    Yes       Depends            No                -
Hash
@ -1734,6 +1837,84 @@ objects with the same name.
This confuses rclone greatly when syncing - use the rclone dedupe
command to rename or remove duplicates.
MIME Type
MIME types (also known as media types) classify types of documents using
a simple text classification, eg text/html or application/pdf.
Some cloud storage systems support reading (R) the MIME type of objects
and some support writing (W) the MIME type of objects.
The MIME type can be important if you are serving files directly to HTTP
from the storage system.
If you are copying from a remote which supports reading (R) to a remote
which supports writing (W) then rclone will preserve the MIME types.
Otherwise they will be guessed from the extension, or the remote itself
may assign the MIME type.
Optional Features
All the remotes support a basic set of features, but there are some
optional features supported by some remotes used to make some operations
more efficient.
Name                   Purge   Copy   Move      DirMove   CleanUp
---------------------- ------- ------ --------- --------- ---------
Google Drive           Yes     Yes    Yes       Yes       No #575
Amazon S3              No      Yes    No        No        No
Openstack Swift        Yes †   Yes    No        No        No
Dropbox                Yes     Yes    Yes       Yes       No #575
Google Cloud Storage   Yes     Yes    No        No        No
Amazon Drive           Yes     No     No #721   No #721   No #575
Microsoft One Drive    Yes     Yes    No #197   No #197   No #575
Hubic                  Yes †   Yes    No        No        No
Backblaze B2           No      No     No        No        Yes
Yandex Disk            Yes     No     No        No        No #575
The local filesystem   Yes     No     Yes       Yes       No
Purge
This deletes a directory quicker than just deleting all the files in the
directory.
† Note Swift and Hubic implement this in order to delete directory
markers but they don't actually have a quicker way of deleting files
other than deleting them individually.
Copy
Used when copying an object to and from the same remote. This is known as a
server side copy so you can copy a file without downloading it and
uploading it again. It is used if you use rclone copy or rclone move if
the remote doesn't support Move directly.
If the server doesn't support Copy directly then for copy operations the
file is downloaded then re-uploaded.
Move
Used when moving/renaming an object on the same remote. This is known as
a server side move of a file. This is used in rclone move if the server
doesn't support DirMove.
If the server isn't capable of Move then rclone simulates it with Copy
then delete. If the server doesn't support Copy then rclone will
download the file and re-upload it.
DirMove
This is used to implement rclone move to move a directory if possible.
If it isn't then it will use Move on each file (which falls back to Copy
then download and upload - see Move section).
CleanUp
This is used for emptying the trash for a remote by rclone cleanup.
If the server can't do CleanUp then rclone cleanup will return an error.
Google Drive
@ -1923,12 +2104,20 @@ Here are the possible extensions with their corresponding mime types.
rdprocessing rdprocessing
ml.document ml.document
epub application/ E-book format
epub+zip
html text/html An HTML
Document
jpg image/jpeg A JPEG Image
File
odp application/ Openoffice
vnd.oasis.op Presentation
endocument.p
resentation
ods application/ Openoffice
vnd.oasis.op Spreadsheet
endocument.s
@ -1966,6 +2155,10 @@ Here are the possible extensions with their corresponding mime types.
Graphics
Format
tsv text/tab-sep Standard TSV
arated-value format for
s spreadsheets
txt text/plain Plain Text
xls application/ Microsoft
@ -2170,6 +2363,17 @@ This will guide you through an interactive setup process.
2 / AES256
   \ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Reduced redundancy storage class
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
storage_class>
Remote config
--------------------
[remote]
@ -2240,6 +2444,27 @@ order of precedence:
If none of these options actually end up providing rclone with AWS
credentials then S3 interaction will be non-authenticated (see below).
Specific options
Here are the command line options specific to this cloud storage system.
--s3-acl=STRING
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit the canned ACL docs.
--s3-storage-class=STRING
Storage class to upload new objects with.
Available options include:
- STANDARD - default storage class
- STANDARD_IA - for less frequently accessed data (e.g backups)
- REDUCED_REDUNDANCY (only for noncritical, reproducible data, has
lower redundancy)
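For example (bucket name illustrative), uploading backups as Standard Infrequent Access:

    rclone copy --s3-storage-class STANDARD_IA /backups remote:mybucket/backups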
Anonymous access to public buckets
If you want to use rclone to access a public bucket, configure with a
@ -2485,6 +2710,35 @@ files in the container.
rclone sync /home/local/directory remote:container
Configuration from an Openstack credentials file
An OpenStack credentials file typically looks something like
this (without the comments)
export OS_AUTH_URL=https://a.provider.net/v2.0
export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
export OS_TENANT_NAME="1234567890123456"
export OS_USERNAME="123abc567xy"
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_REGION_NAME="SBG1"
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
The config file needs to look something like this where $OS_USERNAME
represents the value of the OS_USERNAME variable - 123abc567xy in the
example above.
[remote]
type = swift
user = $OS_USERNAME
key = $OS_PASSWORD
auth = $OS_AUTH_URL
tenant = $OS_TENANT_NAME
Note that you may (or may not) need to set region too - try without
first.
Specific options
Here are the command line options specific to this cloud storage system.
@ -2519,6 +2773,9 @@ fails for Swift.
So this most likely means your username / password is wrong. You can
investigate further with the --dump-bodies flag.
This may also be caused by specifying the region when you shouldn't have
(eg OVH).
Rclone gives Failed to create file system: Response didn't have storage storage url and auth token
This is most likely caused by forgetting to specify your tenant when
@ -2911,6 +3168,12 @@ provide an API to permanently delete files, nor to empty the trash, so
you will have to do that with one of Amazon's apps or via the Amazon
Drive website.
Using with non .com Amazon accounts
Let's say you usually use amazon.co.uk. When you authenticate with
rclone it will take you to an amazon.com page to log in. Your
amazon.co.uk email and password should work here just fine.
Specific options
Here are the command line options specific to this cloud storage system.
@ -2926,13 +3189,26 @@ To download files above this threshold, rclone requests a tempLink which
downloads the file through a temporary URL directly from the underlying
S3 storage.
--acd-upload-wait-per-gb=TIME
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This happens
sometimes for files over 1GB in size and nearly every time for files
bigger than 10GB. This parameter controls the time rclone waits for the
file to appear.
The default value for this parameter is 3 minutes per GB, so by default
it will wait 3 minutes for every GB uploaded to see if the file appears.
You can disable this feature by setting it to 0. This may cause conflict
errors as rclone retries the failed upload but the file will most likely
appear correctly eventually.
These values were determined empirically by observing lots of uploads of
big files for a range of file sizes.
Upload with the -v flag to see more info about what rclone is doing in
this situation.
Limitations
@ -2953,7 +3229,7 @@ means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as any other
failure. To avoid this problem, use the --max-size 50G option to limit the
maximum size of uploaded files.
@ -3385,6 +3661,47 @@ Clean up all the old versions and show that they've gone.
$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
Data usage
It is useful to know how many requests are sent to the server in
different scenarios.
All copy commands send the following 4 requests:
/b2api/v1/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names
The b2_list_file_names request will be sent once for every 1k files in
the remote path, providing the checksum and modification time of the
listed files. As of version 1.33, issue #818 causes extra requests to be
sent when using B2 with Crypt. When a copy operation does not require
any files to be uploaded, no more requests will be sent.
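As a rough model of that accounting: a no-op copy of N files costs the 4 fixed calls above, with b2_list_file_names repeated for each block of 1000 files. A small shell sketch (the counts come from the list above; the 2500-file figure is illustrative):

```shell
# Estimate B2 API requests for a copy that uploads nothing.
files=2500
fixed=4                                  # the four calls listed above
list_calls=$(( (files + 999) / 1000 ))   # one b2_list_file_names per 1k files
# The 4 fixed calls already include one list call; the extras cover the
# remaining blocks of 1000 files.
extra_lists=$(( list_calls - 1 ))
echo $(( fixed + extra_lists ))
```

For 2500 files that comes to 6 requests in total.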
Uploading files that do not require chunking will send 2 requests per
file upload:
/b2api/v1/b2_get_upload_url
/b2api/v1/b2_upload_file/
Uploading files requiring chunking will send 2 requests (one each to
start and finish the upload) and another 2 requests for each chunk:
/b2api/v1/b2_start_large_file
/b2api/v1/b2_get_upload_part_url
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file
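So a chunked upload costs 2 requests plus 2 per chunk. A quick sketch with the default 96M chunk size (the 1GB file size is illustrative):

```shell
# Requests needed to upload one large file in chunks:
# b2_start_large_file + b2_finish_large_file, plus
# b2_get_upload_part_url + b2_upload_part per chunk.
size_mb=1024                          # a hypothetical 1GB file
chunk_mb=96                           # the default --b2-chunk-size
chunks=$(( (size_mb + chunk_mb - 1) / chunk_mb ))
echo $(( 2 + 2 * chunks ))
```

A 1GB file splits into 11 chunks here, for 24 requests in total.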
B2 with crypt
When using B2 with crypt, files are encrypted into a temporary location
and streamed from there. This is required to calculate the encrypted
file's checksum before beginning the upload. On Windows the %TMPDIR%
environment variable is used as the temporary location. If the file
requires chunking, both the chunking and encryption will take place in
memory.
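If the default temporary directory is too small for the spooled encrypted copy, you can point it at a larger volume before uploading. A sketch (the scratch path and remote name are hypothetical; the rclone invocation is shown as a comment, not run here):

```shell
# Spool encrypted files on a volume with plenty of free space before
# uploading large crypted files to B2.
export TMPDIR=/mnt/scratch            # hypothetical large scratch volume
echo "$TMPDIR"
# rclone copy /data secret-b2:backup  # hypothetical crypt-over-B2 remote
```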
Specific options
Here are the command line options specific to this cloud storage system.
differentiate it from the remote.
   \ "yandex"
Storage> 5
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or "myremote:"
remote> remote:path
How to encrypt the filenames.
Choose a number from below, or type in your own value
[secret]
remote = remote:path
filename_encryption = standard
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
Note that rclone does not encrypt
* file length - this can be calculated within 16 bytes
* modification time - used for syncing
Specifying the remote
remote without a : in it. If you specify the remote without a : then
remote without a : then rclone will use a local directory of that name.
So if you use a remote of /path/to/secret/files then rclone will encrypt
stuff to that directory. If you use a remote of name then rclone will
put files in a directory called name in the current directory.
If you specify the remote as remote:path/to/dir then rclone will store
encrypted files in path/to/dir on the remote. If you are using file name
encryption, then when you save files to secret:subdir/subfile this will
store them in the unencrypted path path/to/dir but the subdir/subfile
part will be encrypted.
Note that unless you want encrypted bucket names (which are difficult to
manage because you won't know what directory they represent in web
interfaces etc), you should probably specify a bucket, eg
remote:secretbucket when using bucket based remotes such as S3, Swift,
Hubic, B2, GCS.
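Putting that together, a crypt remote layered over a bucket based remote might end up configured like this (a sketch; the remote name, the underlying s3 remote and the bucket are hypothetical, and the passwords are placeholders):

```
[secret]
type = crypt
remote = s3:secretbucket
filename_encryption = standard
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***
```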
Example
To test I made a little directory of files using "standard" file name
length then you should be OK on all providers.
There may be an even more secure file name encryption mode in the future
which will address the long file name problem.
Modified time and hashes
Crypt stores modification times using the underlying remote so support
depends on that.
Hashes are not stored for crypt. However, the data integrity is
protected by an extremely strong crypto authenticator.
File formats
This will use UNC paths on c:\src but not on z:\dst. Of course this will
cause problems if the absolute path length of a file exceeds 258
characters on z, so only use this option if you have to.
Specific options
Here are the command line options specific to local storage.
--one-file-system, -x
This tells rclone to stay in the filesystem specified by the root and
not to recurse into different file systems.
For example if you have a directory hierarchy like this
root
├── disk1 - disk1 mounted on the root
│   └── file3 - stored on disk1
├── disk2 - disk2 mounted on the root
│   └── file4 - stored on disk2
├── file1 - stored on the root disk
└── file2 - stored on the root disk
Using rclone --one-file-system copy root remote: will only copy file1
and file2. Eg
$ rclone -q --one-file-system ls root
0 file1
0 file2
$ rclone -q ls root
0 disk1/file3
0 disk2/file4
0 file1
0 file2
NB Rclone (like most unix tools such as du, rsync and tar) treats a bind
mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where
it isn't supported (eg Windows) it will not appear as a valid flag.
Changelog
- v1.34 - 2016-11-06
- New Features
- Stop single file and --files-from operations iterating through
the source bucket.
- Stop removing failed upload to cloud storage remotes
- Make ContentType be preserved for cloud to cloud copies
- Add support to toggle bandwidth limits via SIGUSR2 - thanks
Marco Paganini
- rclone check shows count of hashes that couldn't be checked
- rclone listremotes command
- Support linux/arm64 build - thanks Fredrik Fornwall
- Remove Authorization: lines from --dump-headers output
- Bug Fixes
- Ignore files with control characters in the names
- Fix rclone move command
- Delete src files which already existed in dst
- Fix deletion of src file when dst file older
- Fix rclone check on crypted file systems
- Make failed uploads not count as "Transferred"
- Make sure high level retries show with -q
- Use a vendor directory with godep for repeatable builds
- rclone mount - FUSE
- Implement FUSE mount options
- --no-modtime, --debug-fuse, --read-only, --allow-non-empty,
--allow-root, --allow-other
- --default-permissions, --write-back-cache, --max-read-ahead,
--umask, --uid, --gid
- Add --dir-cache-time to control caching of directory entries
- Implement seek for files opened for read (useful for
video players)
- with -no-seek flag to disable
- Fix crash on 32 bit ARM (alignment of 64 bit counter)
- ...and many more internal fixes and improvements!
- Crypt
- Don't show encrypted password in configurator to stop confusion
- Amazon Drive
- New wait for upload option --acd-upload-wait-per-gb
- upload timeouts scale by file size and can be disabled
- Add 502 Bad Gateway to list of errors we retry
- Fix overwriting a file with a zero length file
- Fix ACD file size warning limit - thanks Felix Bünemann
- Local
- Unix: implement -x/--one-file-system to stay on a single file
system
- thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana
- Windows: ignore the symlink bit on files
- Windows: Ignore directory based junction points
- B2
- Make sure each upload has at least one upload slot - fixes
strange upload stats
- Fix uploads when using crypt
- Fix download of large files (sha1 mismatch)
- Return error when we try to create a bucket which someone else
owns
- Update B2 docs with Data usage, and Crypt section - thanks
Tomasz Mazur
- S3
- Command line and config file support for
- Setting/overriding ACL - thanks Radek Senfeld
- Setting storage class - thanks Asko Tamm
- Drive
- Make exponential backoff work exactly as per Google
specification
- Add .epub, .odp and .tsv as export formats.
- Swift
- Don't read metadata for directory marker objects
- v1.33 - 2016-08-24
- New Features
- Implement encryption
Contributors
- Stefan G. Weichinger office@oops.co.at
- Per Cederberg cederberg@gmail.com
- Radek Šenfeld rush@logic.cz
- Fredrik Fornwall fredrik@fornwall.net
- Asko Tamm asko@deekit.net
- xor-zz xor@gstocco.com
- Tomasz Mazur tmazur90@gmail.com
- Marco Paganini paganini@paganini.net
- Felix Bünemann buenemann@louis.info
- Durval Menezes jmrclone@durval.com
- Luiz Carlos Rumbelsperger Viana maxd13_luiz_carlos@hotmail.com
Contact the rclone project
Forum
Forum for general discussions and questions:
- https://forum.rclone.org
GitHub project
The project website is at:
There you can file bug reports, ask for help or contribute pull
requests.
Google+
Rclone has a Google+ page to which announcements are posted
- Google+ page for general comments
Twitter
You can also follow me on Twitter for rclone announcements
- [@njcw](https://twitter.com/njcw)
Email
Or if all else fails, or you want to ask something private or
confidential, email Nick Craig-Wood
Contributors
* Marco Paganini <paganini@paganini.net>
* Felix Bünemann <buenemann@louis.info>
* Durval Menezes <jmrclone@durval.com>
* Luiz Carlos Rumbelsperger Viana <maxd13_luiz_carlos@hotmail.com>
---
title: "Documentation"
description: "Rclone Changelog"
date: "2016-11-06"
---

Changelog
---------
* v1.34 - 2016-11-06
* New Features
* Stop single file and `--files-from` operations iterating through the source bucket.
* Stop removing failed upload to cloud storage remotes
* Make ContentType be preserved for cloud to cloud copies
* Add support to toggle bandwidth limits via SIGUSR2 - thanks Marco Paganini
* `rclone check` shows count of hashes that couldn't be checked
* `rclone listremotes` command
* Support linux/arm64 build - thanks Fredrik Fornwall
* Remove `Authorization:` lines from `--dump-headers` output
* Bug Fixes
* Ignore files with control characters in the names
* Fix `rclone move` command
* Delete src files which already existed in dst
* Fix deletion of src file when dst file older
* Fix `rclone check` on crypted file systems
* Make failed uploads not count as "Transferred"
* Make sure high level retries show with `-q`
* Use a vendor directory with godep for repeatable builds
* `rclone mount` - FUSE
* Implement FUSE mount options
* `--no-modtime`, `--debug-fuse`, `--read-only`, `--allow-non-empty`, `--allow-root`, `--allow-other`
* `--default-permissions`, `--write-back-cache`, `--max-read-ahead`, `--umask`, `--uid`, `--gid`
* Add `--dir-cache-time` to control caching of directory entries
* Implement seek for files opened for read (useful for video players)
* with `-no-seek` flag to disable
* Fix crash on 32 bit ARM (alignment of 64 bit counter)
* ...and many more internal fixes and improvements!
* Crypt
* Don't show encrypted password in configurator to stop confusion
* Amazon Drive
* New wait for upload option `--acd-upload-wait-per-gb`
* upload timeouts scale by file size and can be disabled
* Add 502 Bad Gateway to list of errors we retry
* Fix overwriting a file with a zero length file
* Fix ACD file size warning limit - thanks Felix Bünemann
* Local
* Unix: implement `-x`/`--one-file-system` to stay on a single file system
* thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana
* Windows: ignore the symlink bit on files
* Windows: Ignore directory based junction points
* B2
* Make sure each upload has at least one upload slot - fixes strange upload stats
* Fix uploads when using crypt
* Fix download of large files (sha1 mismatch)
* Return error when we try to create a bucket which someone else owns
* Update B2 docs with Data usage, and Crypt section - thanks Tomasz Mazur
* S3
* Command line and config file support for
* Setting/overriding ACL - thanks Radek Senfeld
* Setting storage class - thanks Asko Tamm
* Drive
* Make exponential backoff work exactly as per Google specification
    * Add `.epub`, `.odp` and `.tsv` as export formats.
* Swift
* Don't read metadata for directory marker objects
* v1.33 - 2016-08-24
* New Features
* Implement encryption
---
date: 2016-11-06T10:15:46Z
title: "rclone"
slug: rclone
url: /commands/rclone/
---

## rclone
Sync files and directories to and from local and remote object stores - v1.34-DEV

### Synopsis
### Options

```
      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int             Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int          Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transfering
      --delete-before                 When synchronizing, delete files on destination before transfering
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int          Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int       Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash               Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int        Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                       Do a trial run with no permanent changes
      --dump-auth                     Dump HTTP headers with auth info
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
  -x, --one-file-system               Don't cross filesystem boundaries.
      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k. (default 10M)
      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
      --s3-acl string                 Canned ACL used when creating buckets and/or storing objects in S3
      --s3-storage-class string       Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int          Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
  -V, --version                       Print the version number
```
### SEE ALSO
* [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output bash completion script for rclone.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file.
* [rclone ls](/commands/rclone_ls/) - List all the objects in the path with size and path.
* [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path.
* [rclone lsl](/commands/rclone_lsl/) - List all the objects in the path with modification time, size and path.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone version](/commands/rclone_version/) - Show the version number.

###### Auto generated by spf13/cobra on 6-Nov-2016
---
date: 2016-11-06T10:15:46Z
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s) --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true) --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory. --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings. --b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8) --checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size -c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf") --config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s) --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file --cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering --delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering --delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default) --delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync --delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently. --drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes -n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-auth Dump HTTP headers with auth info
--dump-filters Dump the filters to the output --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info --dump-filters Dump the filters to the output
--exclude string Exclude files matching pattern --dump-headers Dump HTTP headers - may contain sensitive info
--exclude-from string Read exclude patterns from file --exclude string Exclude files matching pattern
--files-from string Read list of source-file names from file --exclude-from string Read exclude patterns from file
-f, --filter string Add a file-filtering rule --files-from string Read list of source-file names from file
--filter-from string Read filtering patterns from a file -f, --filter string Add a file-filtering rule
--ignore-existing Skip all files that exist on destination --filter-from string Read filtering patterns from a file
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-existing Skip all files that exist on destination
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-size Ignore size when skipping use mod-time or checksum.
--include string Include files matching pattern -I, --ignore-times Don't skip files that match size and time - transfer all files
--include-from string Read include patterns from file --include string Include files matching pattern
      --include-from string       Read include patterns from file
      --log-file string           Log everything to this file
      --low-level-retries int     Number of low level retries to do. (default 10)
      --max-age string            Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int             If set limits the recursion depth to this. (default -1)
-      --max-size int              Don't transfer any file larger than this in k or suffix b|k|M|G
+      --max-size int              Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string         Write memory profile to file
      --min-age string            Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
-      --min-size int              Don't transfer any file smaller than this in k or suffix b|k|M|G
+      --min-size int              Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration    Max time diff to be considered the same (default 1ns)
      --no-check-certificate      Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding          Don't set Accept-Encoding: gzip.
      --no-traverse               Don't traverse destination file system on copy.
      --no-update-modtime         Don't update destination mod-time if files identical.
+  -x, --one-file-system           Don't cross filesystem boundaries.
-      --onedrive-chunk-size int   Above this size files will be chunked - must be multiple of 320k.
+      --onedrive-chunk-size int   Above this size files will be chunked - must be multiple of 320k. (default 10M)
-      --onedrive-upload-cutoff int   Cutoff for switching to chunked upload - must be <= 100MB
+      --onedrive-upload-cutoff int   Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                     Print as little stuff as possible
      --retries int               Retry operations this many times if they fail (default 3)
+      --s3-acl string             Canned ACL used when creating buckets and/or storing objects in S3
+      --s3-storage-class string   Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                 Skip based on size only, not mod-time or checksum
      --stats duration            Interval to print stats (0 to disable) (default 1m0s)
-      --swift-chunk-size int      Above this size files will be chunked into a _segments container.
+      --swift-chunk-size int      Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration          IO idle timeout (default 5m0s)
      --transfers int             Number of file transfers to run in parallel. (default 4)
  -u, --update                    Skip files that are newer on the destination.
  -v, --verbose                   Print lots more stuff
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
-###### Auto generated by spf13/cobra on 24-Aug-2016
+###### Auto generated by spf13/cobra on 6-Nov-2016
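Two of the flags added in this release (`-x, --one-file-system` and `--s3-storage-class`) are easiest to see in context. A hypothetical sketch, not taken from the source: the `s3:bucket/logs` remote and paths are made up, and the command is guarded so it only runs where rclone is installed.

```shell
# Sketch only: exercise two flags new in v1.34 (remote name is hypothetical).
status="skipped"
if command -v rclone >/dev/null 2>&1; then
  # -x stops the local scan crossing filesystem boundaries;
  # --s3-storage-class chooses the class for uploaded S3 objects.
  rclone copy /var/log s3:bucket/logs \
    -x --s3-storage-class STANDARD_IA --dry-run && status="ran"
fi
echo "$status"
```

`--dry-run` keeps the sketch side-effect free; drop it once the flags do what you expect.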
@@ -1,5 +1,5 @@
 ---
-date: 2016-08-24T23:47:55+01:00
+date: 2016-11-06T10:15:46Z
 title: "rclone cat"
 slug: rclone_cat
 url: /commands/rclone_cat/
@@ -34,71 +34,75 @@ rclone cat remote:path
### Options inherited from parent commands
```
-      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink.
+      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-upload-wait-time duration   Time to wait after a failed complete upload to see if it appears. (default 2m0s)
+      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
-      --b2-chunk-size int             Upload chunk size. Must fit in memory.
+      --b2-chunk-size int             Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
-      --b2-upload-cutoff int          Cutoff for switching to chunked upload
+      --b2-upload-cutoff int          Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transfering
      --delete-before                 When synchronizing, delete files on destination before transfering
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
-      --drive-chunk-size int          Upload chunk size. Must a power of 2 >= 256k.
+      --drive-chunk-size int          Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
-      --drive-upload-cutoff int       Cutoff for switching to chunked upload
+      --drive-upload-cutoff int       Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash               Send files to the trash instead of deleting permanently.
-      --dropbox-chunk-size int        Upload chunk size. Max 150M.
+      --dropbox-chunk-size int        Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                       Do a trial run with no permanent changes
+      --dump-auth                     Dump HTTP headers with auth info
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
-      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G
+      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
-      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G
+      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
+  -x, --one-file-system               Don't cross filesystem boundaries.
-      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k.
+      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k. (default 10M)
-      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB
+      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
+      --s3-acl string                 Canned ACL used when creating buckets and/or storing objects in S3
+      --s3-storage-class string       Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
-      --swift-chunk-size int          Above this size files will be chunked into a _segments container.
+      --swift-chunk-size int          Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
-###### Auto generated by spf13/cobra on 24-Aug-2016
+###### Auto generated by spf13/cobra on 6-Nov-2016
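The size and age filters listed above are shared by every command, including `rclone cat`. A hypothetical sketch, not from the source — `remote:path` is a placeholder for any configured remote, and the command is guarded so it only runs where rclone is installed:

```shell
# Sketch only: list files between 1M and 1G modified within the last week.
status="skipped"
if command -v rclone >/dev/null 2>&1; then
  rclone ls remote:path --min-size 1M --max-size 1G --max-age 7d && status="ran"
fi
echo "$status"
```

The same flag combination works with `cat`, `copy`, `sync`, etc., since these are inherited from the parent command.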
@@ -1,5 +1,5 @@
 ---
-date: 2016-08-24T23:47:55+01:00
+date: 2016-11-06T10:15:46Z
 title: "rclone check"
 slug: rclone_check
 url: /commands/rclone_check/
@@ -26,71 +26,75 @@ rclone check source:path dest:path
### Options inherited from parent commands
```
-      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink.
+      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-upload-wait-time duration   Time to wait after a failed complete upload to see if it appears. (default 2m0s)
+      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
-      --b2-chunk-size int             Upload chunk size. Must fit in memory.
+      --b2-chunk-size int             Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
-      --b2-upload-cutoff int          Cutoff for switching to chunked upload
+      --b2-upload-cutoff int          Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transfering
      --delete-before                 When synchronizing, delete files on destination before transfering
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
-      --drive-chunk-size int          Upload chunk size. Must a power of 2 >= 256k.
+      --drive-chunk-size int          Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
-      --drive-upload-cutoff int       Cutoff for switching to chunked upload
+      --drive-upload-cutoff int       Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash               Send files to the trash instead of deleting permanently.
-      --dropbox-chunk-size int        Upload chunk size. Max 150M.
+      --dropbox-chunk-size int        Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                       Do a trial run with no permanent changes
+      --dump-auth                     Dump HTTP headers with auth info
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
-      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G
+      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
-      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G
+      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
+  -x, --one-file-system               Don't cross filesystem boundaries.
-      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k.
+      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k. (default 10M)
-      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB
+      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
+      --s3-acl string                 Canned ACL used when creating buckets and/or storing objects in S3
+      --s3-storage-class string       Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
-      --swift-chunk-size int          Above this size files will be chunked into a _segments container.
+      --swift-chunk-size int          Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
-###### Auto generated by spf13/cobra on 24-Aug-2016
+###### Auto generated by spf13/cobra on 6-Nov-2016
@@ -1,5 +1,5 @@
 ---
-date: 2016-08-24T23:47:55+01:00
+date: 2016-11-06T10:15:46Z
 title: "rclone cleanup"
 slug: rclone_cleanup
 url: /commands/rclone_cleanup/
@@ -23,71 +23,75 @@ rclone cleanup remote:path
### Options inherited from parent commands
```
-      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink.
+      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-upload-wait-time duration   Time to wait after a failed complete upload to see if it appears. (default 2m0s)
+      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
-      --b2-chunk-size int             Upload chunk size. Must fit in memory.
+      --b2-chunk-size int             Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
-      --b2-upload-cutoff int          Cutoff for switching to chunked upload
+      --b2-upload-cutoff int          Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transfering
      --delete-before                 When synchronizing, delete files on destination before transfering
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
-      --drive-chunk-size int          Upload chunk size. Must a power of 2 >= 256k.
+      --drive-chunk-size int          Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
-      --drive-upload-cutoff int       Cutoff for switching to chunked upload
+      --drive-upload-cutoff int       Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash               Send files to the trash instead of deleting permanently.
-      --dropbox-chunk-size int        Upload chunk size. Max 150M.
+      --dropbox-chunk-size int        Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                       Do a trial run with no permanent changes
+      --dump-auth                     Dump HTTP headers with auth info
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
-      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G
+      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
-      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G
+      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
+  -x, --one-file-system               Don't cross filesystem boundaries.
-      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k.
+      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k. (default 10M)
-      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB
+      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
+      --s3-acl string                 Canned ACL used when creating buckets and/or storing objects in S3
+      --s3-storage-class string       Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
-      --swift-chunk-size int          Above this size files will be chunked into a _segments container.
+      --swift-chunk-size int          Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
-###### Auto generated by spf13/cobra on 24-Aug-2016
+###### Auto generated by spf13/cobra on 6-Nov-2016
@@ -1,5 +1,5 @@
 ---
-date: 2016-08-24T23:47:55+01:00
+date: 2016-11-06T10:15:46Z
 title: "rclone config"
 slug: rclone_config
 url: /commands/rclone_config/
@@ -20,71 +20,75 @@ rclone config
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
-    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink.
+    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink. (default 9G)
-    --acd-upload-wait-time duration   Time to wait after a failed complete upload to see if it appears. (default 2m0s)
+    --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
     --ask-password                    Allow prompt for password for encrypted configuration. (default true)
-    --b2-chunk-size int               Upload chunk size. Must fit in memory.
+    --b2-chunk-size int               Upload chunk size. Must fit in memory. (default 96M)
     --b2-test-mode string             A flag string for X-Bz-Test-Mode header.
-    --b2-upload-cutoff int            Cutoff for switching to chunked upload
+    --b2-upload-cutoff int            Cutoff for switching to chunked upload (default 190.735M)
     --b2-versions                     Include old versions in directory listings.
     --bwlimit int                     Bandwidth limit in kBytes/s, or use suffix b|k|M|G
     --checkers int                    Number of checkers to run in parallel. (default 8)
 -c, --checksum                        Skip based on checksum & size, not mod-time & size
     --config string                   Config file. (default "/home/ncw/.rclone.conf")
     --contimeout duration             Connect timeout (default 1m0s)
     --cpuprofile string               Write cpu profile to file
     --delete-after                    When synchronizing, delete files on destination after transfering
     --delete-before                   When synchronizing, delete files on destination before transfering
     --delete-during                   When synchronizing, delete files during transfer (default)
     --delete-excluded                 Delete files on dest excluded from sync
     --drive-auth-owner-only           Only consider files owned by the authenticated user. Requires drive-full-list.
-    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k.
+    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k. (default 8M)
     --drive-formats string            Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
     --drive-full-list                 Use a full listing for directory list. More data but usually quicker. (obsolete)
-    --drive-upload-cutoff int         Cutoff for switching to chunked upload
+    --drive-upload-cutoff int         Cutoff for switching to chunked upload (default 8M)
     --drive-use-trash                 Send files to the trash instead of deleting permanently.
-    --dropbox-chunk-size int          Upload chunk size. Max 150M.
+    --dropbox-chunk-size int          Upload chunk size. Max 150M. (default 128M)
 -n, --dry-run                         Do a trial run with no permanent changes
+    --dump-auth                       Dump HTTP headers with auth info
     --dump-bodies                     Dump HTTP headers and bodies - may contain sensitive info
     --dump-filters                    Dump the filters to the output
     --dump-headers                    Dump HTTP headers - may contain sensitive info
     --exclude string                  Exclude files matching pattern
     --exclude-from string             Read exclude patterns from file
     --files-from string               Read list of source-file names from file
 -f, --filter string                   Add a file-filtering rule
     --filter-from string              Read filtering patterns from a file
     --ignore-existing                 Skip all files that exist on destination
     --ignore-size                     Ignore size when skipping use mod-time or checksum.
 -I, --ignore-times                    Don't skip files that match size and time - transfer all files
     --include string                  Include files matching pattern
     --include-from string             Read include patterns from file
     --log-file string                 Log everything to this file
     --low-level-retries int           Number of low level retries to do. (default 10)
     --max-age string                  Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
     --max-depth int                   If set limits the recursion depth to this. (default -1)
-    --max-size int                    Don't transfer any file larger than this in k or suffix b|k|M|G
+    --max-size int                    Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
     --memprofile string               Write memory profile to file
     --min-age string                  Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
-    --min-size int                    Don't transfer any file smaller than this in k or suffix b|k|M|G
+    --min-size int                    Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
     --modify-window duration          Max time diff to be considered the same (default 1ns)
     --no-check-certificate            Do not verify the server SSL certificate. Insecure.
     --no-gzip-encoding                Don't set Accept-Encoding: gzip.
     --no-traverse                     Don't traverse destination file system on copy.
     --no-update-modtime               Don't update destination mod-time if files identical.
+-x, --one-file-system                 Don't cross filesystem boundaries.
-    --onedrive-chunk-size int         Above this size files will be chunked - must be multiple of 320k.
+    --onedrive-chunk-size int         Above this size files will be chunked - must be multiple of 320k. (default 10M)
-    --onedrive-upload-cutoff int      Cutoff for switching to chunked upload - must be <= 100MB
+    --onedrive-upload-cutoff int      Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
 -q, --quiet                           Print as little stuff as possible
     --retries int                     Retry operations this many times if they fail (default 3)
+    --s3-acl string                   Canned ACL used when creating buckets and/or storing objects in S3
+    --s3-storage-class string         Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
     --size-only                       Skip based on size only, not mod-time or checksum
     --stats duration                  Interval to print stats (0 to disable) (default 1m0s)
-    --swift-chunk-size int            Above this size files will be chunked into a _segments container.
+    --swift-chunk-size int            Above this size files will be chunked into a _segments container. (default 5G)
     --timeout duration                IO idle timeout (default 5m0s)
     --transfers int                   Number of file transfers to run in parallel. (default 4)
 -u, --update                          Skip files that are newer on the destination.
 -v, --verbose                         Print lots more stuff
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
-###### Auto generated by spf13/cobra on 24-Aug-2016
+###### Auto generated by spf13/cobra on 6-Nov-2016

View File

@ -1,5 +1,5 @@
 ---
-date: 2016-08-24T23:47:55+01:00
+date: 2016-11-06T10:15:46Z
 title: "rclone copy"
 slug: rclone_copy
 url: /commands/rclone_copy/
@ -59,71 +59,75 @@ rclone copy source:path dest:path
### Options inherited from parent commands
```
-    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink.
+    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink. (default 9G)
-    --acd-upload-wait-time duration   Time to wait after a failed complete upload to see if it appears. (default 2m0s)
+    --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
     --ask-password                    Allow prompt for password for encrypted configuration. (default true)
-    --b2-chunk-size int               Upload chunk size. Must fit in memory.
+    --b2-chunk-size int               Upload chunk size. Must fit in memory. (default 96M)
     --b2-test-mode string             A flag string for X-Bz-Test-Mode header.
-    --b2-upload-cutoff int            Cutoff for switching to chunked upload
+    --b2-upload-cutoff int            Cutoff for switching to chunked upload (default 190.735M)
     --b2-versions                     Include old versions in directory listings.
     --bwlimit int                     Bandwidth limit in kBytes/s, or use suffix b|k|M|G
     --checkers int                    Number of checkers to run in parallel. (default 8)
 -c, --checksum                        Skip based on checksum & size, not mod-time & size
     --config string                   Config file. (default "/home/ncw/.rclone.conf")
     --contimeout duration             Connect timeout (default 1m0s)
     --cpuprofile string               Write cpu profile to file
     --delete-after                    When synchronizing, delete files on destination after transfering
     --delete-before                   When synchronizing, delete files on destination before transfering
     --delete-during                   When synchronizing, delete files during transfer (default)
     --delete-excluded                 Delete files on dest excluded from sync
     --drive-auth-owner-only           Only consider files owned by the authenticated user. Requires drive-full-list.
-    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k.
+    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k. (default 8M)
     --drive-formats string            Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
     --drive-full-list                 Use a full listing for directory list. More data but usually quicker. (obsolete)
-    --drive-upload-cutoff int         Cutoff for switching to chunked upload
+    --drive-upload-cutoff int         Cutoff for switching to chunked upload (default 8M)
     --drive-use-trash                 Send files to the trash instead of deleting permanently.
-    --dropbox-chunk-size int          Upload chunk size. Max 150M.
+    --dropbox-chunk-size int          Upload chunk size. Max 150M. (default 128M)
 -n, --dry-run                         Do a trial run with no permanent changes
+    --dump-auth                       Dump HTTP headers with auth info
     --dump-bodies                     Dump HTTP headers and bodies - may contain sensitive info
     --dump-filters                    Dump the filters to the output
     --dump-headers                    Dump HTTP headers - may contain sensitive info
     --exclude string                  Exclude files matching pattern
     --exclude-from string             Read exclude patterns from file
     --files-from string               Read list of source-file names from file
 -f, --filter string                   Add a file-filtering rule
     --filter-from string              Read filtering patterns from a file
     --ignore-existing                 Skip all files that exist on destination
     --ignore-size                     Ignore size when skipping use mod-time or checksum.
 -I, --ignore-times                    Don't skip files that match size and time - transfer all files
     --include string                  Include files matching pattern
     --include-from string             Read include patterns from file
     --log-file string                 Log everything to this file
     --low-level-retries int           Number of low level retries to do. (default 10)
     --max-age string                  Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
     --max-depth int                   If set limits the recursion depth to this. (default -1)
-    --max-size int                    Don't transfer any file larger than this in k or suffix b|k|M|G
+    --max-size int                    Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
     --memprofile string               Write memory profile to file
     --min-age string                  Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
-    --min-size int                    Don't transfer any file smaller than this in k or suffix b|k|M|G
+    --min-size int                    Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
     --modify-window duration          Max time diff to be considered the same (default 1ns)
     --no-check-certificate            Do not verify the server SSL certificate. Insecure.
     --no-gzip-encoding                Don't set Accept-Encoding: gzip.
     --no-traverse                     Don't traverse destination file system on copy.
     --no-update-modtime               Don't update destination mod-time if files identical.
+-x, --one-file-system                 Don't cross filesystem boundaries.
-    --onedrive-chunk-size int         Above this size files will be chunked - must be multiple of 320k.
+    --onedrive-chunk-size int         Above this size files will be chunked - must be multiple of 320k. (default 10M)
-    --onedrive-upload-cutoff int      Cutoff for switching to chunked upload - must be <= 100MB
+    --onedrive-upload-cutoff int      Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
 -q, --quiet                           Print as little stuff as possible
     --retries int                     Retry operations this many times if they fail (default 3)
+    --s3-acl string                   Canned ACL used when creating buckets and/or storing objects in S3
+    --s3-storage-class string         Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
     --size-only                       Skip based on size only, not mod-time or checksum
     --stats duration                  Interval to print stats (0 to disable) (default 1m0s)
-    --swift-chunk-size int            Above this size files will be chunked into a _segments container.
+    --swift-chunk-size int            Above this size files will be chunked into a _segments container. (default 5G)
     --timeout duration                IO idle timeout (default 5m0s)
     --transfers int                   Number of file transfers to run in parallel. (default 4)
 -u, --update                          Skip files that are newer on the destination.
 -v, --verbose                         Print lots more stuff
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
-###### Auto generated by spf13/cobra on 24-Aug-2016
+###### Auto generated by spf13/cobra on 6-Nov-2016

View File

@ -1,5 +1,5 @@
 ---
-date: 2016-08-24T23:47:55+01:00
+date: 2016-11-06T10:15:46Z
 title: "rclone dedupe"
 slug: rclone_dedupe
 url: /commands/rclone_dedupe/
@ -95,77 +95,81 @@ rclone dedupe [mode] remote:path
### Options
```
-    --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|rename.
+    --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
```
### Options inherited from parent commands
```
-    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink.
+    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink. (default 9G)
-    --acd-upload-wait-time duration   Time to wait after a failed complete upload to see if it appears. (default 2m0s)
+    --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
     --ask-password                    Allow prompt for password for encrypted configuration. (default true)
-    --b2-chunk-size int               Upload chunk size. Must fit in memory.
+    --b2-chunk-size int               Upload chunk size. Must fit in memory. (default 96M)
     --b2-test-mode string             A flag string for X-Bz-Test-Mode header.
-    --b2-upload-cutoff int            Cutoff for switching to chunked upload
+    --b2-upload-cutoff int            Cutoff for switching to chunked upload (default 190.735M)
     --b2-versions                     Include old versions in directory listings.
     --bwlimit int                     Bandwidth limit in kBytes/s, or use suffix b|k|M|G
     --checkers int                    Number of checkers to run in parallel. (default 8)
 -c, --checksum                        Skip based on checksum & size, not mod-time & size
     --config string                   Config file. (default "/home/ncw/.rclone.conf")
     --contimeout duration             Connect timeout (default 1m0s)
     --cpuprofile string               Write cpu profile to file
     --delete-after                    When synchronizing, delete files on destination after transfering
     --delete-before                   When synchronizing, delete files on destination before transfering
     --delete-during                   When synchronizing, delete files during transfer (default)
     --delete-excluded                 Delete files on dest excluded from sync
     --drive-auth-owner-only           Only consider files owned by the authenticated user. Requires drive-full-list.
-    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k.
+    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k. (default 8M)
     --drive-formats string            Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
     --drive-full-list                 Use a full listing for directory list. More data but usually quicker. (obsolete)
-    --drive-upload-cutoff int         Cutoff for switching to chunked upload
+    --drive-upload-cutoff int         Cutoff for switching to chunked upload (default 8M)
     --drive-use-trash                 Send files to the trash instead of deleting permanently.
-    --dropbox-chunk-size int          Upload chunk size. Max 150M.
+    --dropbox-chunk-size int          Upload chunk size. Max 150M. (default 128M)
 -n, --dry-run                         Do a trial run with no permanent changes
+    --dump-auth                       Dump HTTP headers with auth info
     --dump-bodies                     Dump HTTP headers and bodies - may contain sensitive info
     --dump-filters                    Dump the filters to the output
     --dump-headers                    Dump HTTP headers - may contain sensitive info
     --exclude string                  Exclude files matching pattern
     --exclude-from string             Read exclude patterns from file
     --files-from string               Read list of source-file names from file
 -f, --filter string                   Add a file-filtering rule
     --filter-from string              Read filtering patterns from a file
     --ignore-existing                 Skip all files that exist on destination
     --ignore-size                     Ignore size when skipping use mod-time or checksum.
 -I, --ignore-times                    Don't skip files that match size and time - transfer all files
     --include string                  Include files matching pattern
     --include-from string             Read include patterns from file
     --log-file string                 Log everything to this file
     --low-level-retries int           Number of low level retries to do. (default 10)
     --max-age string                  Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
     --max-depth int                   If set limits the recursion depth to this. (default -1)
-    --max-size int                    Don't transfer any file larger than this in k or suffix b|k|M|G
+    --max-size int                    Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
     --memprofile string               Write memory profile to file
     --min-age string                  Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
-    --min-size int                    Don't transfer any file smaller than this in k or suffix b|k|M|G
+    --min-size int                    Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
     --modify-window duration          Max time diff to be considered the same (default 1ns)
     --no-check-certificate            Do not verify the server SSL certificate. Insecure.
     --no-gzip-encoding                Don't set Accept-Encoding: gzip.
     --no-traverse                     Don't traverse destination file system on copy.
     --no-update-modtime               Don't update destination mod-time if files identical.
+-x, --one-file-system                 Don't cross filesystem boundaries.
-    --onedrive-chunk-size int         Above this size files will be chunked - must be multiple of 320k.
+    --onedrive-chunk-size int         Above this size files will be chunked - must be multiple of 320k. (default 10M)
-    --onedrive-upload-cutoff int      Cutoff for switching to chunked upload - must be <= 100MB
+    --onedrive-upload-cutoff int      Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
 -q, --quiet                           Print as little stuff as possible
     --retries int                     Retry operations this many times if they fail (default 3)
+    --s3-acl string                   Canned ACL used when creating buckets and/or storing objects in S3
+    --s3-storage-class string         Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
     --size-only                       Skip based on size only, not mod-time or checksum
     --stats duration                  Interval to print stats (0 to disable) (default 1m0s)
-    --swift-chunk-size int            Above this size files will be chunked into a _segments container.
+    --swift-chunk-size int            Above this size files will be chunked into a _segments container. (default 5G)
     --timeout duration                IO idle timeout (default 5m0s)
     --transfers int                   Number of file transfers to run in parallel. (default 4)
 -u, --update                          Skip files that are newer on the destination.
 -v, --verbose                         Print lots more stuff
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
-###### Auto generated by spf13/cobra on 24-Aug-2016
+###### Auto generated by spf13/cobra on 6-Nov-2016

View File

@ -1,5 +1,5 @@
 ---
-date: 2016-08-24T23:47:55+01:00
+date: 2016-11-06T10:15:46Z
 title: "rclone delete"
 slug: rclone_delete
 url: /commands/rclone_delete/
@ -37,71 +37,75 @@ rclone delete remote:path
### Options inherited from parent commands
```
-    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink.
+    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink. (default 9G)
-    --acd-upload-wait-time duration   Time to wait after a failed complete upload to see if it appears. (default 2m0s)
+    --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
     --ask-password                    Allow prompt for password for encrypted configuration. (default true)
-    --b2-chunk-size int               Upload chunk size. Must fit in memory.
+    --b2-chunk-size int               Upload chunk size. Must fit in memory. (default 96M)
     --b2-test-mode string             A flag string for X-Bz-Test-Mode header.
-    --b2-upload-cutoff int            Cutoff for switching to chunked upload
+    --b2-upload-cutoff int            Cutoff for switching to chunked upload (default 190.735M)
     --b2-versions                     Include old versions in directory listings.
     --bwlimit int                     Bandwidth limit in kBytes/s, or use suffix b|k|M|G
     --checkers int                    Number of checkers to run in parallel. (default 8)
 -c, --checksum                        Skip based on checksum & size, not mod-time & size
     --config string                   Config file. (default "/home/ncw/.rclone.conf")
     --contimeout duration             Connect timeout (default 1m0s)
     --cpuprofile string               Write cpu profile to file
     --delete-after                    When synchronizing, delete files on destination after transfering
     --delete-before                   When synchronizing, delete files on destination before transfering
     --delete-during                   When synchronizing, delete files during transfer (default)
     --delete-excluded                 Delete files on dest excluded from sync
     --drive-auth-owner-only           Only consider files owned by the authenticated user. Requires drive-full-list.
-    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k.
+    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k. (default 8M)
     --drive-formats string            Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
     --drive-full-list                 Use a full listing for directory list. More data but usually quicker. (obsolete)
-    --drive-upload-cutoff int         Cutoff for switching to chunked upload
+    --drive-upload-cutoff int         Cutoff for switching to chunked upload (default 8M)
     --drive-use-trash                 Send files to the trash instead of deleting permanently.
-    --dropbox-chunk-size int          Upload chunk size. Max 150M.
+    --dropbox-chunk-size int          Upload chunk size. Max 150M. (default 128M)
 -n, --dry-run                         Do a trial run with no permanent changes
+    --dump-auth                       Dump HTTP headers with auth info
     --dump-bodies                     Dump HTTP headers and bodies - may contain sensitive info
     --dump-filters                    Dump the filters to the output
     --dump-headers                    Dump HTTP headers - may contain sensitive info
     --exclude string                  Exclude files matching pattern
     --exclude-from string             Read exclude patterns from file
     --files-from string               Read list of source-file names from file
 -f, --filter string                   Add a file-filtering rule
     --filter-from string              Read filtering patterns from a file
     --ignore-existing                 Skip all files that exist on destination
     --ignore-size                     Ignore size when skipping use mod-time or checksum.
 -I, --ignore-times                    Don't skip files that match size and time - transfer all files
     --include string                  Include files matching pattern
     --include-from string             Read include patterns from file
     --log-file string                 Log everything to this file
     --low-level-retries int           Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y --low-level-retries int Number of low level retries to do. (default 10)
--max-depth int If set limits the recursion depth to this. (default -1) --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G --max-depth int If set limits the recursion depth to this. (default -1)
--memprofile string Write memory profile to file --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y --memprofile string Write memory profile to file
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--modify-window duration Max time diff to be considered the same (default 1ns) --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --modify-window duration Max time diff to be considered the same (default 1ns)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-traverse Don't traverse destination file system on copy. --no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-update-modtime Don't update destination mod-time if files identical. --no-traverse Don't traverse destination file system on copy.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB -x, --one-file-system Don't cross filesystem boundaries.
-q, --quiet Print as little stuff as possible --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--retries int Retry operations this many times if they fail (default 3) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
--size-only Skip based on size only, not mod-time or checksum -q, --quiet Print as little stuff as possible
--stats duration Interval to print stats (0 to disable) (default 1m0s) --retries int Retry operations this many times if they fail (default 3)
--swift-chunk-size int Above this size files will be chunked into a _segments container. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--timeout duration IO idle timeout (default 5m0s) --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--transfers int Number of file transfers to run in parallel. (default 4) --size-only Skip based on size only, not mod-time or checksum
-u, --update Skip files that are newer on the destination. --stats duration Interval to print stats (0 to disable) (default 1m0s)
-v, --verbose Print lots more stuff --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
###### Auto generated by spf13/cobra on 6-Nov-2016
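Many of the global flags above combine naturally on one command line. The sketch below is illustrative only: the `s3:bucket/backup` remote and the local path are placeholders, not part of this manual, and assume a remote configured with `rclone config`.

```
rclone sync --dry-run --transfers 8 --exclude "*.tmp" \
    --s3-storage-class STANDARD_IA /home/user/data s3:bucket/backup
```

`--dry-run` makes this a safe rehearsal: rclone reports what it would copy or delete without changing either side. Drop it once the output looks right.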

View File

@ -1,5 +1,5 @@
---
date: 2016-11-06T10:15:46Z
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@ -35,71 +35,75 @@ rclone genautocomplete [output_file]
### Options inherited from parent commands
```
      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int              Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                       Include old versions in directory listings.
      --bwlimit int                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                      Number of checkers to run in parallel. (default 8)
  -c, --checksum                          Skip based on checksum & size, not mod-time & size
      --config string                     Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration               Connect timeout (default 1m0s)
      --cpuprofile string                 Write cpu profile to file
      --delete-after                      When synchronizing, delete files on destination after transferring
      --delete-before                     When synchronizing, delete files on destination before transferring
      --delete-during                     When synchronizing, delete files during transfer (default)
      --delete-excluded                   Delete files on dest excluded from sync
      --drive-auth-owner-only             Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int              Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list                   Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int           Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash                   Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int            Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                           Do a trial run with no permanent changes
      --dump-auth                         Dump HTTP headers with auth info
      --dump-bodies                       Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                      Dump the filters to the output
      --dump-headers                      Dump HTTP headers - may contain sensitive info
      --exclude string                    Exclude files matching pattern
      --exclude-from string               Read exclude patterns from file
      --files-from string                 Read list of source-file names from file
  -f, --filter string                     Add a file-filtering rule
      --filter-from string                Read filtering patterns from a file
      --ignore-existing                   Skip all files that exist on destination
      --ignore-size                       Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
      --include string                    Include files matching pattern
      --include-from string               Read include patterns from file
      --log-file string                   Log everything to this file
      --low-level-retries int             Number of low level retries to do. (default 10)
      --max-age string                    Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                     If set, limits the recursion depth to this. (default -1)
      --max-size int                      Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string                 Write memory profile to file
      --min-age string                    Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                      Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration            Max time diff to be considered the same (default 1ns)
      --no-check-certificate              Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                  Don't set Accept-Encoding: gzip.
      --no-traverse                       Don't traverse destination file system on copy.
      --no-update-modtime                 Don't update destination mod-time if files identical.
  -x, --one-file-system                   Don't cross filesystem boundaries.
      --onedrive-chunk-size int           Above this size files will be chunked - must be multiple of 320k. (default 10M)
      --onedrive-upload-cutoff int        Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                             Print as little stuff as possible
      --retries int                       Retry operations this many times if they fail (default 3)
      --s3-acl string                     Canned ACL used when creating buckets and/or storing objects in S3
      --s3-storage-class string           Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                         Skip based on size only, not mod-time or checksum
      --stats duration                    Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int              Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration                  IO idle timeout (default 5m0s)
      --transfers int                     Number of file transfers to run in parallel. (default 4)
  -u, --update                            Skip files that are newer on the destination.
  -v, --verbose                           Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
###### Auto generated by spf13/cobra on 6-Nov-2016
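The options above belong to `rclone genautocomplete`, which writes a bash completion script for rclone. A typical invocation might look like the following; the output path is an assumption (the conventional bash-completion directory varies by distribution):

```
sudo rclone genautocomplete /etc/bash_completion.d/rclone
. /etc/bash_completion.d/rclone
```

Sourcing the generated script, as on the second line, activates completion in the current shell; new login shells pick it up automatically.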

View File

@ -1,5 +1,5 @@
---
date: 2016-11-06T10:15:46Z
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
@ -23,71 +23,75 @@ rclone gendocs output_directory
### Options inherited from parent commands
```
      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int              Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                       Include old versions in directory listings.
      --bwlimit int                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                      Number of checkers to run in parallel. (default 8)
  -c, --checksum                          Skip based on checksum & size, not mod-time & size
      --config string                     Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration               Connect timeout (default 1m0s)
      --cpuprofile string                 Write cpu profile to file
      --delete-after                      When synchronizing, delete files on destination after transferring
      --delete-before                     When synchronizing, delete files on destination before transferring
      --delete-during                     When synchronizing, delete files during transfer (default)
      --delete-excluded                   Delete files on dest excluded from sync
      --drive-auth-owner-only             Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int              Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list                   Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int           Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash                   Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int            Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                           Do a trial run with no permanent changes
      --dump-auth                         Dump HTTP headers with auth info
      --dump-bodies                       Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                      Dump the filters to the output
      --dump-headers                      Dump HTTP headers - may contain sensitive info
      --exclude string                    Exclude files matching pattern
      --exclude-from string               Read exclude patterns from file
      --files-from string                 Read list of source-file names from file
  -f, --filter string                     Add a file-filtering rule
      --filter-from string                Read filtering patterns from a file
      --ignore-existing                   Skip all files that exist on destination
      --ignore-size                       Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
      --include string                    Include files matching pattern
      --include-from string               Read include patterns from file
      --log-file string                   Log everything to this file
      --low-level-retries int             Number of low level retries to do. (default 10)
      --max-age string                    Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                     If set, limits the recursion depth to this. (default -1)
      --max-size int                      Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string                 Write memory profile to file
      --min-age string                    Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                      Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration            Max time diff to be considered the same (default 1ns)
      --no-check-certificate              Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                  Don't set Accept-Encoding: gzip.
      --no-traverse                       Don't traverse destination file system on copy.
      --no-update-modtime                 Don't update destination mod-time if files identical.
  -x, --one-file-system                   Don't cross filesystem boundaries.
      --onedrive-chunk-size int           Above this size files will be chunked - must be multiple of 320k. (default 10M)
      --onedrive-upload-cutoff int        Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                             Print as little stuff as possible
      --retries int                       Retry operations this many times if they fail (default 3)
      --s3-acl string                     Canned ACL used when creating buckets and/or storing objects in S3
      --s3-storage-class string           Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                         Skip based on size only, not mod-time or checksum
      --stats duration                    Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int              Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration                  IO idle timeout (default 5m0s)
      --transfers int                     Number of file transfers to run in parallel. (default 4)
  -u, --update                            Skip files that are newer on the destination.
  -v, --verbose                           Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
###### Auto generated by spf13/cobra on 6-Nov-2016
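`rclone gendocs` regenerates markdown help pages of this kind, one file per command, into the directory given on the command line. A minimal sketch (the target directory is a placeholder):

```
rclone gendocs /tmp/rclone-docs
ls /tmp/rclone-docs
```

The listed files can then be fed to a static-site generator or converted to man pages.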

View File

@ -1,5 +1,5 @@
---
date: 2016-11-06T10:15:46Z
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
@ -20,71 +20,75 @@ rclone ls remote:path
### Options inherited from parent commands
```
      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int              Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                       Include old versions in directory listings.
      --bwlimit int                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                      Number of checkers to run in parallel. (default 8)
  -c, --checksum                          Skip based on checksum & size, not mod-time & size
      --config string                     Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration               Connect timeout (default 1m0s)
      --cpuprofile string                 Write cpu profile to file
      --delete-after                      When synchronizing, delete files on destination after transferring
      --delete-before                     When synchronizing, delete files on destination before transferring
      --delete-during                     When synchronizing, delete files during transfer (default)
      --delete-excluded                   Delete files on dest excluded from sync
      --drive-auth-owner-only             Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int              Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list                   Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int           Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash                   Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int            Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                           Do a trial run with no permanent changes
      --dump-auth                         Dump HTTP headers with auth info
      --dump-bodies                       Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                      Dump the filters to the output
      --dump-headers                      Dump HTTP headers - may contain sensitive info
      --exclude string                    Exclude files matching pattern
      --exclude-from string               Read exclude patterns from file
      --files-from string                 Read list of source-file names from file
  -f, --filter string                     Add a file-filtering rule
      --filter-from string                Read filtering patterns from a file
      --ignore-existing                   Skip all files that exist on destination
      --ignore-size                       Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
      --include string                    Include files matching pattern
      --include-from string               Read include patterns from file
      --log-file string                   Log everything to this file
      --low-level-retries int             Number of low level retries to do. (default 10)
      --max-age string                    Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                     If set, limits the recursion depth to this. (default -1)
      --max-size int                      Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string                 Write memory profile to file
      --min-age string                    Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                      Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration            Max time diff to be considered the same (default 1ns)
      --no-check-certificate              Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                  Don't set Accept-Encoding: gzip.
      --no-traverse                       Don't traverse destination file system on copy.
      --no-update-modtime                 Don't update destination mod-time if files identical.
  -x, --one-file-system                   Don't cross filesystem boundaries.
      --onedrive-chunk-size int           Above this size files will be chunked - must be multiple of 320k. (default 10M)
      --onedrive-upload-cutoff int        Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                             Print as little stuff as possible
      --retries int                       Retry operations this many times if they fail (default 3)
      --s3-acl string                     Canned ACL used when creating buckets and/or storing objects in S3
      --s3-storage-class string           Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--transfers int Number of file transfers to run in parallel. (default 4) --size-only Skip based on size only, not mod-time or checksum
-u, --update Skip files that are newer on the destination. --stats duration Interval to print stats (0 to disable) (default 1m0s)
-v, --verbose Print lots more stuff --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016 ###### Auto generated by spf13/cobra on 6-Nov-2016
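The size flags above (`--min-size`, `--max-size`, `--bwlimit`) accept `b|k|M|G` suffixes. As a minimal sketch of the suffix arithmetic, assuming binary (1024-based) multiples — the helper name and the function itself are illustrative, not part of rclone:

```shell
# Hypothetical helper (not part of rclone): expands the k|M|G size
# suffixes accepted by --min-size, --max-size and --bwlimit, assuming
# binary multiples (k = 1024, M = 1024^2, G = 1024^3).
to_bytes() {
  local v=$1 n s
  n=${v%[bkMG]}              # numeric part
  s=${v#"$n"}                # optional suffix
  case "$s" in
    G) echo $((n * 1024 * 1024 * 1024)) ;;
    M) echo $((n * 1024 * 1024)) ;;
    k) echo $((n * 1024)) ;;
    *) echo "$n" ;;          # b or no suffix: plain bytes
  esac
}

to_bytes 10M   # the byte count a flag value like --max-size 10M denotes
```

So `--max-size 10M` limits transfers to files of at most 10485760 bytes under this reading.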
@@ -1,5 +1,5 @@
---
date: 2016-11-06T10:15:46Z
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@@ -20,71 +20,75 @@ rclone lsd remote:path
### Options inherited from parent commands
```
      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int             Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int          Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transfering
      --delete-before                 When synchronizing, delete files on destination before transfering
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int          Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int       Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash               Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int        Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                       Do a trial run with no permanent changes
      --dump-auth                     Dump HTTP headers with auth info
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
  -x, --one-file-system               Don't cross filesystem boundaries.
      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k. (default 10M)
      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
      --s3-acl string                 Canned ACL used when creating buckets and/or storing objects in S3
      --s3-storage-class string       Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int          Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
###### Auto generated by spf13/cobra on 6-Nov-2016
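The age flags listed above (`--min-age`, `--max-age`) take durations with `ms|s|m|h|d|w|M|y` suffixes. A minimal sketch of that suffix arithmetic — the helper name is hypothetical, and the `ms`, `M` (month) and `y` (year) cases are deliberately omitted:

```shell
# Hypothetical helper (not part of rclone): converts the s|m|h|d|w
# duration suffixes accepted by --min-age/--max-age into seconds.
# The ms, M (month) and y (year) suffixes are left out for brevity.
age_seconds() {
  local v=$1 n s
  n=${v%[smhdw]}             # numeric part
  s=${v#"$n"}                # optional suffix
  case "$s" in
    w) echo $((n * 7 * 24 * 3600)) ;;
    d) echo $((n * 24 * 3600)) ;;
    h) echo $((n * 3600)) ;;
    m) echo $((n * 60)) ;;
    *) echo "$n" ;;          # bare numbers and the "s" suffix mean seconds
  esac
}

age_seconds 2h   # the cutoff in seconds a value like --max-age 2h denotes
```

Under this reading, `--max-age 2h` restricts a transfer to files modified within the last 7200 seconds.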
@@ -1,5 +1,5 @@
---
date: 2016-11-06T10:15:46Z
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@@ -20,71 +20,75 @@ rclone lsl remote:path
### Options inherited from parent commands
```
      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int             Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int          Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transfering
      --delete-before                 When synchronizing, delete files on destination before transfering
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int          Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int       Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash               Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int        Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                       Do a trial run with no permanent changes
      --dump-auth                     Dump HTTP headers with auth info
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
  -x, --one-file-system               Don't cross filesystem boundaries.
      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k. (default 10M)
      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
      --s3-acl string                 Canned ACL used when creating buckets and/or storing objects in S3
      --s3-storage-class string       Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int          Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
###### Auto generated by spf13/cobra on 6-Nov-2016
@@ -1,5 +1,5 @@
---
date: 2016-11-06T10:15:46Z
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@@ -23,71 +23,75 @@ rclone md5sum remote:path
### Options inherited from parent commands
```
      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int             Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int          Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transfering
      --delete-before                 When synchronizing, delete files on destination before transfering
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int          Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int       Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash               Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int        Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                       Do a trial run with no permanent changes
      --dump-auth                     Dump HTTP headers with auth info
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
  -x, --one-file-system               Don't cross filesystem boundaries.
      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k. (default 10M)
      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
      --s3-acl string                 Canned ACL used when creating buckets and/or storing objects in S3
      --s3-storage-class string       Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int          Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
###### Auto generated by spf13/cobra on 6-Nov-2016
---
date: 2016-11-06T10:15:46Z
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
---
### Options inherited from parent commands
```
      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int              Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                       Include old versions in directory listings.
      --bwlimit int                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                      Number of checkers to run in parallel. (default 8)
  -c, --checksum                          Skip based on checksum & size, not mod-time & size
      --config string                     Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration               Connect timeout (default 1m0s)
      --cpuprofile string                 Write cpu profile to file
      --delete-after                      When synchronizing, delete files on destination after transfering
      --delete-before                     When synchronizing, delete files on destination before transfering
      --delete-during                     When synchronizing, delete files during transfer (default)
      --delete-excluded                   Delete files on dest excluded from sync
      --drive-auth-owner-only             Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int              Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list                   Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int           Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash                   Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int            Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                           Do a trial run with no permanent changes
      --dump-auth                         Dump HTTP headers with auth info
      --dump-bodies                       Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                      Dump the filters to the output
      --dump-headers                      Dump HTTP headers - may contain sensitive info
      --exclude string                    Exclude files matching pattern
      --exclude-from string               Read exclude patterns from file
      --files-from string                 Read list of source-file names from file
  -f, --filter string                     Add a file-filtering rule
      --filter-from string                Read filtering patterns from a file
      --ignore-existing                   Skip all files that exist on destination
      --ignore-size                       Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
      --include string                    Include files matching pattern
      --include-from string               Read include patterns from file
      --log-file string                   Log everything to this file
      --low-level-retries int             Number of low level retries to do. (default 10)
      --max-age string                    Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                     If set limits the recursion depth to this. (default -1)
      --max-size int                      Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string                 Write memory profile to file
      --min-age string                    Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                      Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration            Max time diff to be considered the same (default 1ns)
      --no-check-certificate              Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                  Don't set Accept-Encoding: gzip.
      --no-traverse                       Don't traverse destination file system on copy.
      --no-update-modtime                 Don't update destination mod-time if files identical.
  -x, --one-file-system                   Don't cross filesystem boundaries.
      --onedrive-chunk-size int           Above this size files will be chunked - must be multiple of 320k. (default 10M)
      --onedrive-upload-cutoff int        Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                             Print as little stuff as possible
      --retries int                       Retry operations this many times if they fail (default 3)
      --s3-acl string                     Canned ACL used when creating buckets and/or storing objects in S3
      --s3-storage-class string           Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                         Skip based on size only, not mod-time or checksum
      --stats duration                    Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int              Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration                  IO idle timeout (default 5m0s)
      --transfers int                     Number of file transfers to run in parallel. (default 4)
  -u, --update                            Skip files that are newer on the destination.
  -v, --verbose                           Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV

###### Auto generated by spf13/cobra on 6-Nov-2016
---
date: 2016-11-06T10:15:46Z
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
---
### Limitations ###

This can only write files sequentially; it can only seek when reading.

Rclone mount inherits rclone's directory handling. In rclone's world
directories don't really exist. This means that empty directories
will have a tendency to disappear once they fall out of the directory
cache.
### Bugs ###

  * All the remotes should work for read, but some may not for write
    * those which need to know the size in advance won't - eg B2, Amazon Drive
    * maybe should pass in size as -1 to mean work it out
    * Or put in an upload cache to cache the files on disk first

### TODO ###
### Options
```
      --allow-non-empty           Allow mounting over a non-empty directory.
      --allow-other               Allow access to other users.
      --allow-root                Allow access to root user.
      --debug-fuse                Debug the FUSE internals - needs -v.
      --default-permissions       Makes kernel enforce access control based on the file mode.
      --dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
      --gid uint32                Override the gid field set by the filesystem. (default 502)
      --max-read-ahead int        The number of bytes that can be prefetched for sequential reads. (default 128k)
      --no-modtime                Don't read the modification time (can speed things up).
      --no-seek                   Don't allow seeking in files.
      --read-only                 Mount read-only.
      --uid uint32                Override the uid field set by the filesystem. (default 502)
      --umask int                 Override the permission bits set by the filesystem. (default 2)
      --write-back-cache          Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
```
### Options inherited from parent commands
```
      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int              Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                       Include old versions in directory listings.
      --bwlimit int                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                      Number of checkers to run in parallel. (default 8)
  -c, --checksum                          Skip based on checksum & size, not mod-time & size
      --config string                     Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration               Connect timeout (default 1m0s)
      --cpuprofile string                 Write cpu profile to file
      --delete-after                      When synchronizing, delete files on destination after transfering
      --delete-before                     When synchronizing, delete files on destination before transfering
      --delete-during                     When synchronizing, delete files during transfer (default)
      --delete-excluded                   Delete files on dest excluded from sync
      --drive-auth-owner-only             Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int              Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list                   Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int           Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash                   Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int            Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                           Do a trial run with no permanent changes
      --dump-auth                         Dump HTTP headers with auth info
      --dump-bodies                       Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                      Dump the filters to the output
      --dump-headers                      Dump HTTP headers - may contain sensitive info
      --exclude string                    Exclude files matching pattern
      --exclude-from string               Read exclude patterns from file
      --files-from string                 Read list of source-file names from file
  -f, --filter string                     Add a file-filtering rule
      --filter-from string                Read filtering patterns from a file
      --ignore-existing                   Skip all files that exist on destination
      --ignore-size                       Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
      --include string                    Include files matching pattern
      --include-from string               Read include patterns from file
      --log-file string                   Log everything to this file
      --low-level-retries int             Number of low level retries to do. (default 10)
      --max-age string                    Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                     If set limits the recursion depth to this. (default -1)
      --max-size int                      Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string                 Write memory profile to file
      --min-age string                    Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                      Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration            Max time diff to be considered the same (default 1ns)
      --no-check-certificate              Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                  Don't set Accept-Encoding: gzip.
      --no-traverse                       Don't traverse destination file system on copy.
      --no-update-modtime                 Don't update destination mod-time if files identical.
  -x, --one-file-system                   Don't cross filesystem boundaries.
      --onedrive-chunk-size int           Above this size files will be chunked - must be multiple of 320k. (default 10M)
      --onedrive-upload-cutoff int        Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                             Print as little stuff as possible
      --retries int                       Retry operations this many times if they fail (default 3)
      --s3-acl string                     Canned ACL used when creating buckets and/or storing objects in S3
      --s3-storage-class string           Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                         Skip based on size only, not mod-time or checksum
      --stats duration                    Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int              Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration                  IO idle timeout (default 5m0s)
      --transfers int                     Number of file transfers to run in parallel. (default 4)
  -u, --update                            Skip files that are newer on the destination.
  -v, --verbose                           Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV

###### Auto generated by spf13/cobra on 6-Nov-2016
---
date: 2016-11-06T10:15:46Z
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
---
### Options inherited from parent commands
```
      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int              Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                       Include old versions in directory listings.
      --bwlimit int                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                      Number of checkers to run in parallel. (default 8)
  -c, --checksum                          Skip based on checksum & size, not mod-time & size
      --config string                     Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration               Connect timeout (default 1m0s)
      --cpuprofile string                 Write cpu profile to file
      --delete-after                      When synchronizing, delete files on destination after transfering
      --delete-before                     When synchronizing, delete files on destination before transfering
      --delete-during                     When synchronizing, delete files during transfer (default)
      --delete-excluded                   Delete files on dest excluded from sync
      --drive-auth-owner-only             Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int              Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list                   Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int           Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash                   Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int            Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                           Do a trial run with no permanent changes
      --dump-auth                         Dump HTTP headers with auth info
      --dump-bodies                       Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                      Dump the filters to the output
      --dump-headers                      Dump HTTP headers - may contain sensitive info
      --exclude string                    Exclude files matching pattern
      --exclude-from string               Read exclude patterns from file
      --files-from string                 Read list of source-file names from file
  -f, --filter string                     Add a file-filtering rule
      --filter-from string                Read filtering patterns from a file
      --ignore-existing                   Skip all files that exist on destination
      --ignore-size                       Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
      --include string                    Include files matching pattern
      --include-from string               Read include patterns from file
      --log-file string                   Log everything to this file
      --low-level-retries int             Number of low level retries to do. (default 10)
      --max-age string                    Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                     If set limits the recursion depth to this. (default -1)
      --max-size int                      Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string                 Write memory profile to file
      --min-age string                    Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                      Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration            Max time diff to be considered the same (default 1ns)
      --no-check-certificate              Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                  Don't set Accept-Encoding: gzip.
      --no-traverse                       Don't traverse destination file system on copy.
      --no-update-modtime                 Don't update destination mod-time if files identical.
  -x, --one-file-system                   Don't cross filesystem boundaries.
      --onedrive-chunk-size int           Above this size files will be chunked - must be multiple of 320k. (default 10M)
      --onedrive-upload-cutoff int        Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                             Print as little stuff as possible
      --retries int                       Retry operations this many times if they fail (default 3)
      --s3-acl string                     Canned ACL used when creating buckets and/or storing objects in S3
      --s3-storage-class string           Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                         Skip based on size only, not mod-time or checksum
      --stats duration                    Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int              Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration                  IO idle timeout (default 5m0s)
      --transfers int                     Number of file transfers to run in parallel. (default 4)
  -u, --update                            Skip files that are newer on the destination.
  -v, --verbose                           Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV

###### Auto generated by spf13/cobra on 6-Nov-2016
---
date: 2016-11-06T10:15:46Z
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
---
### Options inherited from parent commands
```
      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int              Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                       Include old versions in directory listings.
      --bwlimit int                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                      Number of checkers to run in parallel. (default 8)
  -c, --checksum                          Skip based on checksum & size, not mod-time & size
      --config string                     Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration               Connect timeout (default 1m0s)
      --cpuprofile string                 Write cpu profile to file
      --delete-after                      When synchronizing, delete files on destination after transfering
      --delete-before                     When synchronizing, delete files on destination before transfering
      --delete-during                     When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync --delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently. --drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes -n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-auth Dump HTTP headers with auth info
--dump-filters Dump the filters to the output --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info --dump-filters Dump the filters to the output
--exclude string Exclude files matching pattern --dump-headers Dump HTTP headers - may contain sensitive info
--exclude-from string Read exclude patterns from file --exclude string Exclude files matching pattern
--files-from string Read list of source-file names from file --exclude-from string Read exclude patterns from file
-f, --filter string Add a file-filtering rule --files-from string Read list of source-file names from file
--filter-from string Read filtering patterns from a file -f, --filter string Add a file-filtering rule
--ignore-existing Skip all files that exist on destination --filter-from string Read filtering patterns from a file
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-existing Skip all files that exist on destination
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-size Ignore size when skipping use mod-time or checksum.
--include string Include files matching pattern -I, --ignore-times Don't skip files that match size and time - transfer all files
--include-from string Read include patterns from file --include string Include files matching pattern
--log-file string Log everything to this file --include-from string Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --log-file string Log everything to this file
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y --low-level-retries int Number of low level retries to do. (default 10)
--max-depth int If set limits the recursion depth to this. (default -1) --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G --max-depth int If set limits the recursion depth to this. (default -1)
--memprofile string Write memory profile to file --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y --memprofile string Write memory profile to file
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--modify-window duration Max time diff to be considered the same (default 1ns) --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --modify-window duration Max time diff to be considered the same (default 1ns)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-traverse Don't traverse destination file system on copy. --no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-update-modtime Don't update destination mod-time if files identical. --no-traverse Don't traverse destination file system on copy.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB -x, --one-file-system Don't cross filesystem boundaries.
-q, --quiet Print as little stuff as possible --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--retries int Retry operations this many times if they fail (default 3) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
--size-only Skip based on size only, not mod-time or checksum -q, --quiet Print as little stuff as possible
--stats duration Interval to print stats (0 to disable) (default 1m0s) --retries int Retry operations this many times if they fail (default 3)
--swift-chunk-size int Above this size files will be chunked into a _segments container. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--timeout duration IO idle timeout (default 5m0s) --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--transfers int Number of file transfers to run in parallel. (default 4) --size-only Skip based on size only, not mod-time or checksum
-u, --update Skip files that are newer on the destination. --stats duration Interval to print stats (0 to disable) (default 1m0s)
-v, --verbose Print lots more stuff --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016 ###### Auto generated by spf13/cobra on 6-Nov-2016
@@ -1,5 +1,5 @@
 ---
-date: 2016-08-24T23:47:55+01:00
+date: 2016-11-06T10:15:46Z
 title: "rclone rmdir"
 slug: rclone_rmdir
 url: /commands/rclone_rmdir/
@@ -22,71 +22,75 @@ rclone rmdir remote:path
 ### Options inherited from parent commands
 ```
-    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink.
-    --acd-upload-wait-time duration   Time to wait after a failed complete upload to see if it appears. (default 2m0s)
+    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink. (default 9G)
+    --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
     --ask-password                    Allow prompt for password for encrypted configuration. (default true)
-    --b2-chunk-size int               Upload chunk size. Must fit in memory.
+    --b2-chunk-size int               Upload chunk size. Must fit in memory. (default 96M)
     --b2-test-mode string             A flag string for X-Bz-Test-Mode header.
-    --b2-upload-cutoff int            Cutoff for switching to chunked upload
+    --b2-upload-cutoff int            Cutoff for switching to chunked upload (default 190.735M)
     --b2-versions                     Include old versions in directory listings.
     --bwlimit int                     Bandwidth limit in kBytes/s, or use suffix b|k|M|G
     --checkers int                    Number of checkers to run in parallel. (default 8)
 -c, --checksum                        Skip based on checksum & size, not mod-time & size
     --config string                   Config file. (default "/home/ncw/.rclone.conf")
     --contimeout duration             Connect timeout (default 1m0s)
     --cpuprofile string               Write cpu profile to file
     --delete-after                    When synchronizing, delete files on destination after transfering
     --delete-before                   When synchronizing, delete files on destination before transfering
     --delete-during                   When synchronizing, delete files during transfer (default)
     --delete-excluded                 Delete files on dest excluded from sync
     --drive-auth-owner-only           Only consider files owned by the authenticated user. Requires drive-full-list.
-    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k.
+    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k. (default 8M)
     --drive-formats string            Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
     --drive-full-list                 Use a full listing for directory list. More data but usually quicker. (obsolete)
-    --drive-upload-cutoff int         Cutoff for switching to chunked upload
+    --drive-upload-cutoff int         Cutoff for switching to chunked upload (default 8M)
     --drive-use-trash                 Send files to the trash instead of deleting permanently.
-    --dropbox-chunk-size int          Upload chunk size. Max 150M.
+    --dropbox-chunk-size int          Upload chunk size. Max 150M. (default 128M)
 -n, --dry-run                         Do a trial run with no permanent changes
+    --dump-auth                       Dump HTTP headers with auth info
     --dump-bodies                     Dump HTTP headers and bodies - may contain sensitive info
     --dump-filters                    Dump the filters to the output
     --dump-headers                    Dump HTTP headers - may contain sensitive info
     --exclude string                  Exclude files matching pattern
     --exclude-from string             Read exclude patterns from file
     --files-from string               Read list of source-file names from file
 -f, --filter string                   Add a file-filtering rule
     --filter-from string              Read filtering patterns from a file
     --ignore-existing                 Skip all files that exist on destination
     --ignore-size                     Ignore size when skipping use mod-time or checksum.
 -I, --ignore-times                    Don't skip files that match size and time - transfer all files
     --include string                  Include files matching pattern
     --include-from string             Read include patterns from file
     --log-file string                 Log everything to this file
     --low-level-retries int           Number of low level retries to do. (default 10)
     --max-age string                  Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
     --max-depth int                   If set limits the recursion depth to this. (default -1)
-    --max-size int                    Don't transfer any file larger than this in k or suffix b|k|M|G
+    --max-size int                    Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
     --memprofile string               Write memory profile to file
     --min-age string                  Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
-    --min-size int                    Don't transfer any file smaller than this in k or suffix b|k|M|G
+    --min-size int                    Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
     --modify-window duration          Max time diff to be considered the same (default 1ns)
     --no-check-certificate            Do not verify the server SSL certificate. Insecure.
     --no-gzip-encoding                Don't set Accept-Encoding: gzip.
     --no-traverse                     Don't traverse destination file system on copy.
     --no-update-modtime               Don't update destination mod-time if files identical.
-    --onedrive-chunk-size int         Above this size files will be chunked - must be multiple of 320k.
-    --onedrive-upload-cutoff int      Cutoff for switching to chunked upload - must be <= 100MB
+-x, --one-file-system                 Don't cross filesystem boundaries.
+    --onedrive-chunk-size int         Above this size files will be chunked - must be multiple of 320k. (default 10M)
+    --onedrive-upload-cutoff int      Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
 -q, --quiet                           Print as little stuff as possible
     --retries int                     Retry operations this many times if they fail (default 3)
+    --s3-acl string                   Canned ACL used when creating buckets and/or storing objects in S3
+    --s3-storage-class string         Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
     --size-only                       Skip based on size only, not mod-time or checksum
     --stats duration                  Interval to print stats (0 to disable) (default 1m0s)
-    --swift-chunk-size int            Above this size files will be chunked into a _segments container.
+    --swift-chunk-size int            Above this size files will be chunked into a _segments container. (default 5G)
     --timeout duration                IO idle timeout (default 5m0s)
     --transfers int                   Number of file transfers to run in parallel. (default 4)
 -u, --update                          Skip files that are newer on the destination.
 -v, --verbose                         Print lots more stuff
 ```
 ### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
-###### Auto generated by spf13/cobra on 24-Aug-2016
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+###### Auto generated by spf13/cobra on 6-Nov-2016
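Two other flags new in this diff are `-x`/`--one-file-system` and `--dump-auth`. A hedged sketch of combining them; the local path, the remote name `backup:`, and the log file name are hypothetical placeholders:

```shell
# Copy a local tree without crossing into other mounted filesystems (-x),
# dumping HTTP headers including auth info to a log file for debugging.
# "backup:" is a placeholder remote name.
rclone copy -x /srv/data backup:data --dump-auth --log-file rclone.log
```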
@@ -1,5 +1,5 @@
 ---
-date: 2016-08-24T23:47:55+01:00
+date: 2016-11-06T10:15:46Z
 title: "rclone sha1sum"
 slug: rclone_sha1sum
 url: /commands/rclone_sha1sum/
@@ -23,71 +23,75 @@ rclone sha1sum remote:path
 ### Options inherited from parent commands
 ```
-    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink.
-    --acd-upload-wait-time duration   Time to wait after a failed complete upload to see if it appears. (default 2m0s)
+    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink. (default 9G)
+    --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
     --ask-password                    Allow prompt for password for encrypted configuration. (default true)
-    --b2-chunk-size int               Upload chunk size. Must fit in memory.
+    --b2-chunk-size int               Upload chunk size. Must fit in memory. (default 96M)
     --b2-test-mode string             A flag string for X-Bz-Test-Mode header.
-    --b2-upload-cutoff int            Cutoff for switching to chunked upload
+    --b2-upload-cutoff int            Cutoff for switching to chunked upload (default 190.735M)
     --b2-versions                     Include old versions in directory listings.
     --bwlimit int                     Bandwidth limit in kBytes/s, or use suffix b|k|M|G
     --checkers int                    Number of checkers to run in parallel. (default 8)
 -c, --checksum                        Skip based on checksum & size, not mod-time & size
     --config string                   Config file. (default "/home/ncw/.rclone.conf")
     --contimeout duration             Connect timeout (default 1m0s)
     --cpuprofile string               Write cpu profile to file
     --delete-after                    When synchronizing, delete files on destination after transfering
     --delete-before                   When synchronizing, delete files on destination before transfering
     --delete-during                   When synchronizing, delete files during transfer (default)
     --delete-excluded                 Delete files on dest excluded from sync
     --drive-auth-owner-only           Only consider files owned by the authenticated user. Requires drive-full-list.
-    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k.
+    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k. (default 8M)
     --drive-formats string            Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
     --drive-full-list                 Use a full listing for directory list. More data but usually quicker. (obsolete)
-    --drive-upload-cutoff int         Cutoff for switching to chunked upload
+    --drive-upload-cutoff int         Cutoff for switching to chunked upload (default 8M)
     --drive-use-trash                 Send files to the trash instead of deleting permanently.
-    --dropbox-chunk-size int          Upload chunk size. Max 150M.
+    --dropbox-chunk-size int          Upload chunk size. Max 150M. (default 128M)
 -n, --dry-run                         Do a trial run with no permanent changes
+    --dump-auth                       Dump HTTP headers with auth info
     --dump-bodies                     Dump HTTP headers and bodies - may contain sensitive info
     --dump-filters                    Dump the filters to the output
     --dump-headers                    Dump HTTP headers - may contain sensitive info
     --exclude string                  Exclude files matching pattern
     --exclude-from string             Read exclude patterns from file
     --files-from string               Read list of source-file names from file
 -f, --filter string                   Add a file-filtering rule
     --filter-from string              Read filtering patterns from a file
     --ignore-existing                 Skip all files that exist on destination
     --ignore-size                     Ignore size when skipping use mod-time or checksum.
 -I, --ignore-times                    Don't skip files that match size and time - transfer all files
     --include string                  Include files matching pattern
     --include-from string             Read include patterns from file
     --log-file string                 Log everything to this file
     --low-level-retries int           Number of low level retries to do. (default 10)
     --max-age string                  Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
     --max-depth int                   If set limits the recursion depth to this. (default -1)
-    --max-size int                    Don't transfer any file larger than this in k or suffix b|k|M|G
+    --max-size int                    Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
     --memprofile string               Write memory profile to file
     --min-age string                  Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
-    --min-size int                    Don't transfer any file smaller than this in k or suffix b|k|M|G
+    --min-size int                    Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
     --modify-window duration          Max time diff to be considered the same (default 1ns)
     --no-check-certificate            Do not verify the server SSL certificate. Insecure.
     --no-gzip-encoding                Don't set Accept-Encoding: gzip.
     --no-traverse                     Don't traverse destination file system on copy.
     --no-update-modtime               Don't update destination mod-time if files identical.
-    --onedrive-chunk-size int         Above this size files will be chunked - must be multiple of 320k.
-    --onedrive-upload-cutoff int      Cutoff for switching to chunked upload - must be <= 100MB
+-x, --one-file-system                 Don't cross filesystem boundaries.
+    --onedrive-chunk-size int         Above this size files will be chunked - must be multiple of 320k. (default 10M)
+    --onedrive-upload-cutoff int      Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
 -q, --quiet                           Print as little stuff as possible
     --retries int                     Retry operations this many times if they fail (default 3)
+    --s3-acl string                   Canned ACL used when creating buckets and/or storing objects in S3
+    --s3-storage-class string         Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
     --size-only                       Skip based on size only, not mod-time or checksum
     --stats duration                  Interval to print stats (0 to disable) (default 1m0s)
-    --swift-chunk-size int            Above this size files will be chunked into a _segments container.
+    --swift-chunk-size int            Above this size files will be chunked into a _segments container. (default 5G)
     --timeout duration                IO idle timeout (default 5m0s)
     --transfers int                   Number of file transfers to run in parallel. (default 4)
 -u, --update                          Skip files that are newer on the destination.
 -v, --verbose                         Print lots more stuff
 ```
 ### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
-###### Auto generated by spf13/cobra on 24-Aug-2016
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
+###### Auto generated by spf13/cobra on 6-Nov-2016
@@ -1,5 +1,5 @@
 ---
-date: 2016-08-24T23:47:55+01:00
+date: 2016-11-06T10:15:46Z
 title: "rclone size"
 slug: rclone_size
 url: /commands/rclone_size/
@@ -20,71 +20,75 @@ rclone size remote:path
 ### Options inherited from parent commands
 ```
-    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink.
-    --acd-upload-wait-time duration   Time to wait after a failed complete upload to see if it appears. (default 2m0s)
+    --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink. (default 9G)
+    --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
     --ask-password                    Allow prompt for password for encrypted configuration. (default true)
-    --b2-chunk-size int               Upload chunk size. Must fit in memory.
+    --b2-chunk-size int               Upload chunk size. Must fit in memory. (default 96M)
     --b2-test-mode string             A flag string for X-Bz-Test-Mode header.
-    --b2-upload-cutoff int            Cutoff for switching to chunked upload
+    --b2-upload-cutoff int            Cutoff for switching to chunked upload (default 190.735M)
     --b2-versions                     Include old versions in directory listings.
     --bwlimit int                     Bandwidth limit in kBytes/s, or use suffix b|k|M|G
     --checkers int                    Number of checkers to run in parallel. (default 8)
 -c, --checksum                        Skip based on checksum & size, not mod-time & size
     --config string                   Config file. (default "/home/ncw/.rclone.conf")
     --contimeout duration             Connect timeout (default 1m0s)
     --cpuprofile string               Write cpu profile to file
     --delete-after                    When synchronizing, delete files on destination after transfering
     --delete-before                   When synchronizing, delete files on destination before transfering
     --delete-during                   When synchronizing, delete files during transfer (default)
     --delete-excluded                 Delete files on dest excluded from sync
     --drive-auth-owner-only           Only consider files owned by the authenticated user. Requires drive-full-list.
-    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k.
+    --drive-chunk-size int            Upload chunk size. Must a power of 2 >= 256k. (default 8M)
     --drive-formats string            Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
     --drive-full-list                 Use a full listing for directory list. More data but usually quicker. (obsolete)
-    --drive-upload-cutoff int         Cutoff for switching to chunked upload
+    --drive-upload-cutoff int         Cutoff for switching to chunked upload (default 8M)
     --drive-use-trash                 Send files to the trash instead of deleting permanently.
-    --dropbox-chunk-size int          Upload chunk size. Max 150M.
+    --dropbox-chunk-size int          Upload chunk size. Max 150M. (default 128M)
 -n, --dry-run                         Do a trial run with no permanent changes
+    --dump-auth                       Dump HTTP headers with auth info
     --dump-bodies                     Dump HTTP headers and bodies - may contain sensitive info
     --dump-filters                    Dump the filters to the output
     --dump-headers                    Dump HTTP headers - may contain sensitive info
     --exclude string                  Exclude files matching pattern
     --exclude-from string             Read exclude patterns from file
     --files-from string               Read list of source-file names from file
 -f, --filter string                   Add a file-filtering rule
     --filter-from string              Read filtering patterns from a file
     --ignore-existing                 Skip all files that exist on destination
     --ignore-size                     Ignore size when skipping use mod-time or checksum.
 -I, --ignore-times                    Don't skip files that match size and time - transfer all files
     --include string                  Include files matching pattern
     --include-from string             Read include patterns from file
--log-file string Log everything to this file --include-from string Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --log-file string Log everything to this file
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y --low-level-retries int Number of low level retries to do. (default 10)
--max-depth int If set limits the recursion depth to this. (default -1) --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G --max-depth int If set limits the recursion depth to this. (default -1)
--memprofile string Write memory profile to file --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y --memprofile string Write memory profile to file
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--modify-window duration Max time diff to be considered the same (default 1ns) --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --modify-window duration Max time diff to be considered the same (default 1ns)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-traverse Don't traverse destination file system on copy. --no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-update-modtime Don't update destination mod-time if files identical. --no-traverse Don't traverse destination file system on copy.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB -x, --one-file-system Don't cross filesystem boundaries.
-q, --quiet Print as little stuff as possible --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--retries int Retry operations this many times if they fail (default 3) --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
--size-only Skip based on size only, not mod-time or checksum -q, --quiet Print as little stuff as possible
--stats duration Interval to print stats (0 to disable) (default 1m0s) --retries int Retry operations this many times if they fail (default 3)
--swift-chunk-size int Above this size files will be chunked into a _segments container. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--timeout duration IO idle timeout (default 5m0s) --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--transfers int Number of file transfers to run in parallel. (default 4) --size-only Skip based on size only, not mod-time or checksum
-u, --update Skip files that are newer on the destination. --stats duration Interval to print stats (0 to disable) (default 1m0s)
-v, --verbose Print lots more stuff --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
###### Auto generated by spf13/cobra on 6-Nov-2016

View File

@ -1,5 +1,5 @@
---
date: 2016-11-06T10:15:46Z
title: "rclone sync"
slug: rclone_sync
url: /commands/rclone_sync/
@ -39,71 +39,75 @@ rclone sync source:path dest:path
### Options inherited from parent commands
```
      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int              Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                       Include old versions in directory listings.
      --bwlimit int                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                      Number of checkers to run in parallel. (default 8)
  -c, --checksum                          Skip based on checksum & size, not mod-time & size
      --config string                     Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration               Connect timeout (default 1m0s)
      --cpuprofile string                 Write cpu profile to file
      --delete-after                      When synchronizing, delete files on destination after transfering
      --delete-before                     When synchronizing, delete files on destination before transfering
      --delete-during                     When synchronizing, delete files during transfer (default)
      --delete-excluded                   Delete files on dest excluded from sync
      --drive-auth-owner-only             Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int              Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list                   Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int           Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash                   Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int            Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                           Do a trial run with no permanent changes
      --dump-auth                         Dump HTTP headers with auth info
      --dump-bodies                       Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                      Dump the filters to the output
      --dump-headers                      Dump HTTP headers - may contain sensitive info
      --exclude string                    Exclude files matching pattern
      --exclude-from string               Read exclude patterns from file
      --files-from string                 Read list of source-file names from file
  -f, --filter string                     Add a file-filtering rule
      --filter-from string                Read filtering patterns from a file
      --ignore-existing                   Skip all files that exist on destination
      --ignore-size                       Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
      --include string                    Include files matching pattern
      --include-from string               Read include patterns from file
      --log-file string                   Log everything to this file
      --low-level-retries int             Number of low level retries to do. (default 10)
      --max-age string                    Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                     If set limits the recursion depth to this. (default -1)
      --max-size int                      Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string                 Write memory profile to file
      --min-age string                    Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                      Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration            Max time diff to be considered the same (default 1ns)
      --no-check-certificate              Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                  Don't set Accept-Encoding: gzip.
      --no-traverse                       Don't traverse destination file system on copy.
      --no-update-modtime                 Don't update destination mod-time if files identical.
  -x, --one-file-system                   Don't cross filesystem boundaries.
      --onedrive-chunk-size int           Above this size files will be chunked - must be multiple of 320k. (default 10M)
      --onedrive-upload-cutoff int        Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                             Print as little stuff as possible
      --retries int                       Retry operations this many times if they fail (default 3)
      --s3-acl string                     Canned ACL used when creating buckets and/or storing objects in S3
      --s3-storage-class string           Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                         Skip based on size only, not mod-time or checksum
      --stats duration                    Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int              Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration                  IO idle timeout (default 5m0s)
      --transfers int                     Number of file transfers to run in parallel. (default 4)
  -u, --update                            Skip files that are newer on the destination.
  -v, --verbose                           Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
###### Auto generated by spf13/cobra on 6-Nov-2016
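Several of the inherited flags above are typically combined on one command line. As a minimal sketch (the remote name `remote:backup` and the source path are placeholders, not taken from this manual; configure a real remote with `rclone config` first):

```shell
# Preview a bandwidth-limited sync with `--dry-run` before running it for real.
# "remote:backup" and /home/user/docs are placeholder names.
set -- rclone sync --dry-run --bwlimit 512 --transfers 8 \
    --exclude "*.tmp" /home/user/docs remote:backup
if command -v rclone >/dev/null 2>&1; then
    # Tolerate failure so the sketch is safe on machines without a remote set up.
    "$@" || echo "rclone exited non-zero (is remote:backup configured?)"
else
    # rclone not installed here - just show what would run.
    echo "would run: $*"
fi
```

Dropping `--dry-run` from the same command performs the actual transfer.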

View File

@ -1,5 +1,5 @@
---
date: 2016-11-06T10:15:46Z
title: "rclone version"
slug: rclone_version
url: /commands/rclone_version/
@ -20,71 +20,75 @@ rclone version
### Options inherited from parent commands
```
      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int              Cutoff for switching to chunked upload (default 190.735M)
      --b2-versions                       Include old versions in directory listings.
      --bwlimit int                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                      Number of checkers to run in parallel. (default 8)
  -c, --checksum                          Skip based on checksum & size, not mod-time & size
      --config string                     Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration               Connect timeout (default 1m0s)
      --cpuprofile string                 Write cpu profile to file
      --delete-after                      When synchronizing, delete files on destination after transfering
      --delete-before                     When synchronizing, delete files on destination before transfering
      --delete-during                     When synchronizing, delete files during transfer (default)
      --delete-excluded                   Delete files on dest excluded from sync
      --drive-auth-owner-only             Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int              Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list                   Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int           Cutoff for switching to chunked upload (default 8M)
      --drive-use-trash                   Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int            Upload chunk size. Max 150M. (default 128M)
  -n, --dry-run                           Do a trial run with no permanent changes
      --dump-auth                         Dump HTTP headers with auth info
      --dump-bodies                       Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                      Dump the filters to the output
      --dump-headers                      Dump HTTP headers - may contain sensitive info
      --exclude string                    Exclude files matching pattern
      --exclude-from string               Read exclude patterns from file
      --files-from string                 Read list of source-file names from file
  -f, --filter string                     Add a file-filtering rule
      --filter-from string                Read filtering patterns from a file
      --ignore-existing                   Skip all files that exist on destination
      --ignore-size                       Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
      --include string                    Include files matching pattern
      --include-from string               Read include patterns from file
      --log-file string                   Log everything to this file
      --low-level-retries int             Number of low level retries to do. (default 10)
      --max-age string                    Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                     If set limits the recursion depth to this. (default -1)
      --max-size int                      Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
      --memprofile string                 Write memory profile to file
      --min-age string                    Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                      Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
      --modify-window duration            Max time diff to be considered the same (default 1ns)
      --no-check-certificate              Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                  Don't set Accept-Encoding: gzip.
      --no-traverse                       Don't traverse destination file system on copy.
      --no-update-modtime                 Don't update destination mod-time if files identical.
  -x, --one-file-system                   Don't cross filesystem boundaries.
      --onedrive-chunk-size int           Above this size files will be chunked - must be multiple of 320k. (default 10M)
      --onedrive-upload-cutoff int        Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
  -q, --quiet                             Print as little stuff as possible
      --retries int                       Retry operations this many times if they fail (default 3)
      --s3-acl string                     Canned ACL used when creating buckets and/or storing objects in S3
      --s3-storage-class string           Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
      --size-only                         Skip based on size only, not mod-time or checksum
      --stats duration                    Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int              Above this size files will be chunked into a _segments container. (default 5G)
      --timeout duration                  IO idle timeout (default 5m0s)
      --transfers int                     Number of file transfers to run in parallel. (default 4)
  -u, --update                            Skip files that are newer on the destination.
  -v, --verbose                           Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.34-DEV
###### Auto generated by spf13/cobra on 6-Nov-2016

View File

@ -2,40 +2,43 @@
title: "Rclone downloads"
description: "Download rclone binaries for your OS."
type: page
date: "2016-11-06"
---

Rclone Download v1.34
=====================

  * Windows
    * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-windows-386.zip)
    * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-windows-amd64.zip)
  * OSX
    * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-osx-386.zip)
    * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-osx-amd64.zip)
  * Linux
    * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-linux-386.zip)
    * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-linux-amd64.zip)
    * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.34-linux-arm.zip)
    * [ARM - 64 Bit](http://downloads.rclone.org/rclone-v1.34-linux-arm64.zip)
  * FreeBSD
    * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-freebsd-386.zip)
    * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-freebsd-amd64.zip)
    * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.34-freebsd-arm.zip)
  * NetBSD
    * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-netbsd-386.zip)
    * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-netbsd-amd64.zip)
    * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.34-netbsd-arm.zip)
  * OpenBSD
    * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-openbsd-386.zip)
    * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-openbsd-amd64.zip)
  * Plan 9
    * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.34-plan9-386.zip)
    * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-plan9-amd64.zip)
  * Solaris
    * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.34-solaris-amd64.zip)

You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.34).

You can also download [the releases using SSL](https://downloads-rclone-org-7d7d567e.cdn.memsites.com/).
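The download URLs all follow one pattern, `rclone-v1.34-<os>-<arch>.zip`, so a script can select the right binary for the current machine. A sketch (the `uname`-to-name mappings below are assumptions, not taken from this page):

```shell
# Build the v1.34 download URL for the current platform from the pattern
# used in the list above. The uname -> rclone name mappings are assumptions.
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
case "$OS" in
    darwin) OS=osx ;;   # the download list labels macOS builds "osx"
esac
ARCH=$(uname -m)
case "$ARCH" in
    x86_64)  ARCH=amd64 ;;
    i?86)    ARCH=386 ;;
    aarch64) ARCH=arm64 ;;
esac
URL="http://downloads.rclone.org/rclone-v1.34-${OS}-${ARCH}.zip"
echo "$URL"
```

Piping `$URL` to `curl -O` and unzipping gives the single `rclone` binary described in the Install section.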
Beta releases
=============

View File

@ -1,4 +1,4 @@
package fs

// Version of rclone
var Version = "v1.34-DEV"
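The release bump is a one-line change to this constant. As a stand-alone illustration of how such a version string is used (the constant is copied locally here so the snippet compiles without the rclone source tree):

```go
package main

import "fmt"

// Version mirrors fs.Version above; a local copy for illustration only.
var Version = "v1.34-DEV"

func main() {
	// `rclone version` reports this string.
	fmt.Println("rclone", Version)
}
```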

rclone.1

File diff suppressed because it is too large