Version v1.33

This commit is contained in:
Nick Craig-Wood 2016-08-24 22:58:24 +01:00
parent c2599cb116
commit 3996bbb8cb
30 changed files with 3383 additions and 1450 deletions


@ -12,7 +12,7 @@
<div id="header">
<h1 class="title">rclone(1) User Manual</h1>
<h2 class="author">Nick Craig-Wood</h2>
<h3 class="date">Aug 04, 2016</h3>
<h3 class="date">Aug 24, 2016</h3>
</div>
<h1 id="rclone">Rclone</h1>
<p><a href="http://rclone.org/"><img src="http://rclone.org/img/rclone-120x120.png" alt="Logo" /></a></p>
@ -283,15 +283,26 @@ two-3.txt: renamed from: two.txt</code></pre>
<pre><code>rclone dedupe rename &quot;drive:Google Photos&quot;</code></pre>
<pre><code>rclone dedupe [mode] remote:path</code></pre>
<h3 id="options">Options</h3>
<pre><code> --dedupe-mode value Dedupe mode interactive|skip|first|newest|oldest|rename. (default &quot;interactive&quot;)</code></pre>
<pre><code> --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename.</code></pre>
<h2 id="rclone-authorize">rclone authorize</h2>
<p>Remote authorization.</p>
<h3 id="synopsis-18">Synopsis</h3>
<p>Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.</p>
<pre><code>rclone authorize</code></pre>
<h2 id="rclone-cat">rclone cat</h2>
<p>Concatenates any files and sends them to stdout.</p>
<h3 id="synopsis-19">Synopsis</h3>
<p>rclone cat sends any files to standard output.</p>
<p>You can use it like this to output a single file</p>
<pre><code>rclone cat remote:path/to/file</code></pre>
<p>Or like this to output any file in dir or subdirectories.</p>
<pre><code>rclone cat remote:path/to/dir</code></pre>
<p>Or like this to output any .txt files in dir or subdirectories.</p>
<pre><code>rclone --include &quot;*.txt&quot; cat remote:path/to/dir</code></pre>
<pre><code>rclone cat remote:path</code></pre>
<h2 id="rclone-genautocomplete">rclone genautocomplete</h2>
<p>Output bash completion script for rclone.</p>
<h3 id="synopsis-19">Synopsis</h3>
<h3 id="synopsis-20">Synopsis</h3>
<p>Generates a bash shell autocompletion script for rclone.</p>
<p>This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg</p>
<pre><code>sudo rclone genautocomplete</code></pre>
@ -301,9 +312,46 @@ two-3.txt: renamed from: two.txt</code></pre>
<pre><code>rclone genautocomplete [output_file]</code></pre>
<h2 id="rclone-gendocs">rclone gendocs</h2>
<p>Output markdown docs for rclone to the directory supplied.</p>
<h3 id="synopsis-20">Synopsis</h3>
<h3 id="synopsis-21">Synopsis</h3>
<p>This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.</p>
<pre><code>rclone gendocs output_directory</code></pre>
<h2 id="rclone-mount">rclone mount</h2>
<p>Mount the remote as a mountpoint. <strong>EXPERIMENTAL</strong></p>
<h3 id="synopsis-22">Synopsis</h3>
<p>rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's cloud storage systems as a file system with FUSE.</p>
<p>This is <strong>EXPERIMENTAL</strong> - use with care.</p>
<p>First set up your remote using <code>rclone config</code>. Check it works with <code>rclone ls</code> etc.</p>
<p>Start the mount like this</p>
<pre><code>rclone mount remote:path/to/files /path/to/local/mount &amp;</code></pre>
<p>Stop the mount with</p>
<pre><code>fusermount -u /path/to/local/mount</code></pre>
<p>Or with OS X</p>
<pre><code>umount /path/to/local/mount</code></pre>
<h3 id="limitations">Limitations</h3>
<p>This can only read files sequentially, or write files sequentially. It can't read and write or seek in files.</p>
<p>rclonefs inherits rclone's directory handling. In rclone's world directories don't really exist. This means that empty directories will have a tendency to disappear once they fall out of the directory cache.</p>
<p>The bucket based FSes (eg swift, s3, google cloud storage, b2) won't work from the root - you will need to specify a bucket, or a path within the bucket. So <code>swift:</code> won't work whereas <code>swift:bucket</code> will as will <code>swift:bucket/path</code>.</p>
<p>Only supported on Linux, FreeBSD and OS X at the moment.</p>
<h3 id="rclone-mount-vs-rclone-synccopy">rclone mount vs rclone sync/copy</h3>
<p>File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. This might happen in the future, but for the moment rclone mount won't do that, so will be less reliable than the rclone command.</p>
<h3 id="bugs">Bugs</h3>
<ul>
<li>All the remotes should work for read, but some may not for write
<ul>
<li>those which need to know the size in advance won't - eg B2</li>
<li>maybe should pass in size as -1 to mean work it out</li>
</ul></li>
</ul>
<h3 id="todo">TODO</h3>
<ul>
<li>Check hashes on upload/download</li>
<li>Preserve timestamps</li>
<li>Move directories</li>
</ul>
<pre><code>rclone mount remote:path /path/to/mountpoint</code></pre>
<h3 id="options-1">Options</h3>
<pre><code> --debug-fuse Debug the FUSE internals - needs -v.
--no-modtime Don&#39;t read the modification time (can speed things up).</code></pre>
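<p>These flags slot into the normal invocation, eg (an illustrative command combining both; <code>--debug-fuse</code> needs <code>-v</code>):</p>
<pre><code>rclone mount --no-modtime --debug-fuse -v remote:path /path/to/mountpoint &amp;</code></pre>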
<h2 id="copying-single-files">Copying single files</h2>
<p>rclone normally syncs or copies directories. However if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error <code>Failed to create file system for &quot;remote:file&quot;: is a file not a directory</code> if it isn't.</p>
<p>For example, suppose you have a remote with a file in called <code>test.jpg</code>, then you could copy just that file like this</p>
@ -340,7 +388,7 @@ two-3.txt: renamed from: two.txt</code></pre>
<p>This can be used when scripting to make aged backups efficiently, eg</p>
<pre><code>rclone sync remote:current-backup remote:previous-backup
rclone sync /path/to/files remote:current-backup</code></pre>
<h2 id="options-1">Options</h2>
<h2 id="options-2">Options</h2>
<p>Rclone has a number of options to control its behaviour.</p>
<p>Options which use TIME use the Go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as &quot;300ms&quot;, &quot;-1.5h&quot; or &quot;2h45m&quot;. Valid time units are &quot;ns&quot;, &quot;us&quot; (or &quot;µs&quot;), &quot;ms&quot;, &quot;s&quot;, &quot;m&quot;, &quot;h&quot;.</p>
<p>Options which use SIZE use kByte by default. However a suffix of <code>b</code> for bytes, <code>k</code> for kBytes, <code>M</code> for MBytes and <code>G</code> for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.</p>
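<p>Since TIME values go through Go's standard duration parser, you can sanity check a value with <code>time.ParseDuration</code> - a minimal sketch, not rclone code:</p>
<pre><code>package main

import (
    &quot;fmt&quot;
    &quot;time&quot;
)

func main() {
    // Each of these is a valid TIME value per the rules above.
    for _, s := range []string{&quot;300ms&quot;, &quot;-1.5h&quot;, &quot;2h45m&quot;} {
        d, err := time.ParseDuration(s)
        if err != nil {
            fmt.Println(s, &quot;is invalid:&quot;, err)
            continue
        }
        fmt.Println(s, &quot;parses to&quot;, d)
    }
}</code></pre>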
@ -1073,8 +1121,21 @@ y/e/d&gt; y</code></pre>
</tr>
</tbody>
</table>
<h3 id="limitations">Limitations</h3>
<h3 id="limitations-1">Limitations</h3>
<p>Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MBytes/s but lots of small files can take a long time.</p>
<h3 id="making-your-own-client_id">Making your own client_id</h3>
<p>When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.</p>
<p>However you might find you get better performance making your own client_id if you are a heavy user. Or you may not depending on exactly how Google have been raising rclone's rate limit.</p>
<p>Here is how to create your own Google Drive client ID for rclone:</p>
<ol style="list-style-type: decimal">
<li><p>Log into the <a href="https://console.developers.google.com/">Google API Console</a> with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access)</p></li>
<li><p>Select a project or create a new project.</p></li>
<li><p>Under Overview, Google APIs, Google Apps APIs, click &quot;Drive API&quot;, then &quot;Enable&quot;.</p></li>
<li><p>Click &quot;Credentials&quot; in the left-side panel (not &quot;Go to credentials&quot;, which opens the wizard), then &quot;Create credentials&quot;, then &quot;OAuth client ID&quot;. It will prompt you to set the OAuth consent screen product name, if you haven't set one already.</p></li>
<li><p>Choose an application type of &quot;other&quot;, and click &quot;Create&quot;. (the default name is fine)</p></li>
<li><p>It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.</p></li>
</ol>
<p>(Thanks to <span class="citation">@balazer</span> on github for these instructions.)</p>
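<p>The resulting entry in your rclone config ends up looking something like this - the remote name <code>gdrive</code> and the placeholder values are illustrative only:</p>
<pre><code>[gdrive]
type = drive
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET</code></pre>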
<h2 id="amazon-s3">Amazon S3</h2>
<p>Paths are specified as <code>remote:bucket</code> (or <code>remote:</code> for the <code>lsd</code> command.) You may put subdirectories in too, eg <code>remote:bucket/path/to/dir</code>.</p>
<p>Here is an example of making an s3 configuration. First run</p>
@ -1184,6 +1245,25 @@ Choose a number from below, or type in your own value
9 / South America (Sao Paulo) Region.
\ &quot;sa-east-1&quot;
location_constraint&gt; 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ &quot;private&quot;
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ &quot;public-read&quot;
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
3 | Granting this on a bucket is generally not recommended.
\ &quot;public-read-write&quot;
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
\ &quot;authenticated-read&quot;
/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ &quot;bucket-owner-read&quot;
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ &quot;bucket-owner-full-control&quot;
acl&gt; private
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
@ -1427,7 +1507,7 @@ y/e/d&gt; y</code></pre>
<h3 id="modified-time-2">Modified time</h3>
<p>The modified time is stored as metadata on the object as <code>X-Object-Meta-Mtime</code> as floating point since the epoch accurate to 1 ns.</p>
<p>This is a defacto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.</p>
<h3 id="limitations-1">Limitations</h3>
<h3 id="limitations-2">Limitations</h3>
<p>The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.</p>
<h3 id="troubleshooting">Troubleshooting</h3>
<h4 id="rclone-gives-failed-to-create-file-system-for-remote-bad-request">Rclone gives Failed to create file system for &quot;remote:&quot;: Bad Request</h4>
@ -1510,7 +1590,7 @@ y/e/d&gt; y</code></pre>
<p>Here are the command line options specific to this cloud storage system.</p>
<h4 id="dropbox-chunk-sizesize">--dropbox-chunk-size=SIZE</h4>
<p>Upload chunk size. Max 150M. The default is 128MB. Note that this isn't buffered into memory.</p>
<h3 id="limitations-2">Limitations</h3>
<h3 id="limitations-3">Limitations</h3>
<p>Note that Dropbox is case insensitive so you can't have a file called &quot;Hello.doc&quot; and one called &quot;hello.doc&quot;.</p>
<p>There are some file names such as <code>thumbs.db</code> which Dropbox can't store. There is a full list of them in the <a href="https://www.dropbox.com/en/help/145">&quot;Ignored Files&quot; section of this document</a>. Rclone will issue an error message <code>File name disallowed - not uploading</code> if it attempts to upload one of those file names, but the sync won't fail.</p>
<p>If you have more than 10,000 files in a directory then <code>rclone purge dropbox:dir</code> will return the error <code>Failed to purge: There are too many files involved in this operation</code>. As a work-around do an <code>rclone delete dropbox:dir</code> followed by an <code>rclone rmdir dropbox:dir</code>.</p>
@ -1703,7 +1783,9 @@ y/e/d&gt; y</code></pre>
<h4 id="acd-templink-thresholdsize">--acd-templink-threshold=SIZE</h4>
<p>Files this size or more will be downloaded via their <code>tempLink</code>. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.</p>
<p>To download files above this threshold, rclone requests a <code>tempLink</code> which downloads the file through a temporary URL directly from the underlying S3 storage.</p>
<h3 id="limitations-3">Limitations</h3>
<h4 id="acd-upload-wait-timetime">--acd-upload-wait-time=TIME</h4>
<p>Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This controls the time rclone waits - 2 minutes by default. You might want to increase the time if you are having problems with very big files. Upload with the <code>-v</code> flag for more info.</p>
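<p>For example, to raise the wait for a sync involving very big files (an illustrative invocation):</p>
<pre><code>rclone copy -v --acd-upload-wait-time 5m /path/to/bigfiles acd:backup</code></pre>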
<h3 id="limitations-4">Limitations</h3>
<p>Note that Amazon Drive is case insensitive so you can't have a file called &quot;Hello.doc&quot; and one called &quot;hello.doc&quot;.</p>
<p>Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see <code>--retries</code> flag) which should hopefully work around this problem.</p>
<p>Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.</p>
@ -1791,7 +1873,7 @@ y/e/d&gt; y</code></pre>
<p>Above this size files will be chunked - must be a multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.</p>
<h4 id="onedrive-upload-cutoffsize">--onedrive-upload-cutoff=SIZE</h4>
<p>Cutoff for switching to chunked upload - must be &lt;= 100MB. The default is 10MB.</p>
<h3 id="limitations-4">Limitations</h3>
<h3 id="limitations-5">Limitations</h3>
<p>Note that One Drive is case insensitive so you can't have a file called &quot;Hello.doc&quot; and one called &quot;hello.doc&quot;.</p>
<p>Rclone only supports your default One Drive, and doesn't work with One Drive for business. Both these issues may be fixed at some point depending on user demand!</p>
<p>There are quite a few characters that can't be in One Drive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a <code>?</code> in it, it will be mapped to <code>？</code> instead.</p>
@ -1871,7 +1953,7 @@ y/e/d&gt; y</code></pre>
<p>The modified time is stored as metadata on the object as <code>X-Object-Meta-Mtime</code> as floating point since the epoch accurate to 1 ns.</p>
<p>This is a defacto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.</p>
<p>Note that Hubic wraps the Swift backend, so most of the properties are the same.</p>
<h3 id="limitations-5">Limitations</h3>
<h3 id="limitations-6">Limitations</h3>
<p>This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.</p>
<p>The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.</p>
<h2 id="backblaze-b2">Backblaze B2</h2>
@ -2079,6 +2161,186 @@ y/e/d&gt; y</code></pre>
<p>Modified times are supported and are stored accurate to 1 ns in custom metadata called <code>rclone_modified</code> in RFC3339 with nanoseconds format.</p>
<h3 id="md5-checksums">MD5 checksums</h3>
<p>MD5 checksums are natively supported by Yandex Disk.</p>
<h2 id="crypt">Crypt</h2>
<p>The <code>crypt</code> remote encrypts and decrypts another remote.</p>
<p>To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example.</p>
<p>First check your chosen remote is working - we'll call it <code>remote:path</code> in these docs. Note that anything inside <code>remote:path</code> will be encrypted and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote <code>s3:bucket</code>. If you just use <code>s3:</code> then rclone will make encrypted bucket names too (if using file name encryption) which may or may not be what you want.</p>
<p>Now configure <code>crypt</code> using <code>rclone config</code>. We will call this one <code>secret</code> to differentiate it from the <code>remote</code>.</p>
<pre><code>No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q&gt; n
name&gt; secret
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ &quot;amazon cloud drive&quot;
2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ &quot;s3&quot;
3 / Backblaze B2
\ &quot;b2&quot;
4 / Dropbox
\ &quot;dropbox&quot;
5 / Encrypt/Decrypt a remote
\ &quot;crypt&quot;
6 / Google Cloud Storage (this is not Google Drive)
\ &quot;google cloud storage&quot;
7 / Google Drive
\ &quot;drive&quot;
8 / Hubic
\ &quot;hubic&quot;
9 / Local Disk
\ &quot;local&quot;
10 / Microsoft OneDrive
\ &quot;onedrive&quot;
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ &quot;swift&quot;
12 / Yandex Disk
\ &quot;yandex&quot;
Storage&gt; 5
Remote to encrypt/decrypt.
remote&gt; remote:path
How to encrypt the filenames.
Choose a number from below, or type in your own value
1 / Don&#39;t encrypt the file names. Adds a &quot;.bin&quot; extension only.
\ &quot;off&quot;
2 / Encrypt the filenames see the docs for the details.
\ &quot;standard&quot;
filename_encryption&gt; 2
Password or pass phrase for encryption.
y) Yes type in my own password
g) Generate random password
y/g&gt; y
Enter the password:
password:
Confirm the password:
password:
Password or pass phrase for salt. Optional but recommended.
Should be different to the previous password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n&gt; g
Password strength in bits.
64 is just about memorable
128 is secure
1024 is the maximum
Bits&gt; 128
Your password is: JAsJvRcgR-_veXNfy_sGmQ
Use this password?
y) Yes
n) No
y/n&gt; y
Remote config
--------------------
[secret]
remote = remote:path
filename_encryption = standard
password = CfDxopZIXFG0Oo-ac7dPLWWOHkNJbw
password2 = HYUpfuzHJL8qnX9fOaIYijq0xnVLwyVzp3y4SF3TwYqAU6HLysk
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&gt; y</code></pre>
<p><strong>Important</strong> The password stored in the config file is lightly obscured so it isn't immediately obvious what it is. It is in no way secure unless you use config file encryption.</p>
<p>A long passphrase is recommended, or you can use a random one. Note that if you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible - all the secrets used are derived from those two passwords/passphrases.</p>
<p>Note that rclone does not encrypt</p>
<ul>
<li>file length - this can be calculated to within 16 bytes</li>
<li>modification time - used for syncing</li>
</ul>
<h2 id="example">Example</h2>
<p>To test I made a little directory of files using &quot;standard&quot; file name encryption.</p>
<pre><code>plaintext/
├── file0.txt
├── file1.txt
└── subdir
├── file2.txt
├── file3.txt
└── subsubdir
└── file4.txt</code></pre>
<p>Copy these to the remote and list them back</p>
<pre><code>$ rclone -q copy plaintext secret:
$ rclone -q ls secret:
7 file1.txt
6 file0.txt
8 subdir/file2.txt
10 subdir/subsubdir/file4.txt
9 subdir/file3.txt</code></pre>
<p>Now see what that looked like when encrypted</p>
<pre><code>$ rclone -q ls remote:path
55 hagjclgavj2mbiqm6u6cnjjqcg
54 v05749mltvv1tf4onltun46gls
57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps</code></pre>
<p>Note that this retains the directory structure which means you can do this</p>
<pre><code>$ rclone -q ls secret:subdir
8 file2.txt
9 file3.txt
10 subsubdir/file4.txt</code></pre>
<p>If you don't use file name encryption then the remote will look like this - note the <code>.bin</code> extensions added to prevent the cloud provider attempting to interpret the data.</p>
<pre><code>$ rclone -q ls remote:path
54 file0.txt.bin
57 subdir/file3.txt.bin
56 subdir/file2.txt.bin
58 subdir/subsubdir/file4.txt.bin
55 file1.txt.bin</code></pre>
<h3 id="file-name-encryption-modes">File name encryption modes</h3>
<p>Here are some of the features of the file name encryption modes</p>
<p>Off</p>
<ul>
<li>doesn't hide file names or directory structure</li>
<li>allows for longer file names (~246 characters)</li>
<li>can use sub paths and copy single files</li>
</ul>
<p>Standard</p>
<ul>
<li>file names encrypted</li>
<li>file names can't be as long (~156 characters)</li>
<li>can use sub paths and copy single files</li>
<li>directory structure visible</li>
<li>identical file names will have identical uploaded names</li>
<li>can use shortcuts to shorten the directory recursion</li>
</ul>
<p>Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using &quot;Standard&quot; file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.</p>
<p>There may be an even more secure file name encryption mode in the future which will address the long file name problem.</p>
<h2 id="file-formats">File formats</h2>
<h3 id="file-encryption">File encryption</h3>
<p>Files are encrypted 1:1 source file to destination object. The file has a header and is divided into chunks.</p>
<h4 id="header">Header</h4>
<ul>
<li>8 bytes magic string <code>RCLONE\x00\x00</code></li>
<li>24 bytes Nonce (IV)</li>
</ul>
<p>The initial nonce is generated from the operating system's crypto strong random number generator. The nonce is incremented for each chunk read making sure each nonce is unique for each block written. The chance of a nonce being re-used is minuscule. If you wrote an exabyte of data (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of re-using a nonce.</p>
<h4 id="chunk">Chunk</h4>
<p>Each chunk will contain 64kB of data, except for the last one which may have less data. The data chunk is in standard NACL secretbox format. Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate messages.</p>
<p>Each chunk contains:</p>
<ul>
<li>16 Bytes of Poly1305 authenticator</li>
<li>1 - 65536 bytes XSalsa20 encrypted data</li>
</ul>
<p>64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can't be too big.</p>
<p>This uses a 32 byte (256 bit key) key derived from the user password.</p>
<h4 id="examples">Examples</h4>
<p>1 byte file will encrypt to</p>
<ul>
<li>32 bytes header</li>
<li>17 bytes data chunk</li>
</ul>
<p>49 bytes total</p>
<p>1MB (1048576 bytes) file will encrypt to</p>
<ul>
<li>32 bytes header</li>
<li>16 chunks of 65568 bytes</li>
</ul>
<p>1049120 bytes total (a 0.05% overhead). This is the overhead for big files.</p>
<h3 id="name-encryption">Name encryption</h3>
<p>File names are encrypted segment by segment - the path is broken up into <code>/</code> separated strings and these are encrypted individually.</p>
<p>File segments are padded using PKCS#7 to a multiple of 16 bytes before encryption.</p>
<p>They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper &quot;A Parallelizable Enciphering Mode&quot; by Halevi and Rogaway.</p>
<p>This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system.</p>
<p>This means that</p>
<ul>
<li>filenames with the same name will encrypt the same</li>
<li>filenames which start the same won't have a common prefix</li>
</ul>
<p>This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user password.</p>
<p>After encryption they are written out using a modified version of standard <code>base32</code> encoding as described in RFC4648. The standard encoding is modified in two ways:</p>
<ul>
<li>it becomes lower case (no-one likes upper case filenames!)</li>
<li>we strip the padding character <code>=</code></li>
</ul>
<p><code>base32</code> is used rather than the more efficient <code>base64</code> so rclone can be used on case insensitive remotes (eg Windows, Amazon Drive).</p>
<h3 id="key-derivation">Key derivation</h3>
<p>Rclone uses <code>scrypt</code> with parameters <code>N=16384, r=8, p=1</code> with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.</p>
<p><code>scrypt</code> makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.</p>
<h2 id="local-filesystem">Local Filesystem</h2>
<p>Local paths are specified as normal filesystem paths, eg <code>/path/to/wherever</code>, so</p>
<pre><code>rclone sync /home/source /tmp/destination</code></pre>
@ -2106,6 +2368,42 @@ nounc = true</code></pre>
<p>This will use UNC paths on <code>c:\src</code> but not on <code>z:\dst</code>. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.</p>
<h2 id="changelog">Changelog</h2>
<ul>
<li>v1.33 - 2016-08-24
<ul>
<li>New Features</li>
<li>Implement encryption
<ul>
<li>data encrypted in NACL secretbox format</li>
<li>with optional file name encryption</li>
</ul></li>
<li>New commands
<ul>
<li>rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL)</li>
<li>works on Linux, FreeBSD and OS X (need testers for the last 2!)</li>
<li>rclone cat - outputs remote file or files to the terminal</li>
<li>rclone genautocomplete - command to make a bash completion script for rclone</li>
</ul></li>
<li>Editing a remote using <code>rclone config</code> now goes through the wizard</li>
<li>Compile with go 1.7 - this fixes rclone on macOS Sierra and on 386 processors</li>
<li>Use cobra for sub commands and docs generation</li>
<li>drive</li>
<li>Document how to make your own client_id</li>
<li>s3</li>
<li>User-configurable Amazon S3 ACL (thanks Radek Šenfeld)</li>
<li>b2</li>
<li>Fix stats accounting for upload - no more jumping to 100% done</li>
<li>On cleanup delete hide marker if it is the current file</li>
<li>New B2 API endpoint (thanks Per Cederberg)</li>
<li>Set maximum backoff to 5 Minutes</li>
<li>onedrive</li>
<li>Fix URL escaping in file names - eg uploading files with <code>+</code> in them.</li>
<li>amazon cloud drive</li>
<li>Fix token expiry during large uploads</li>
<li>Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors</li>
<li>local</li>
<li>Fix filenames with invalid UTF-8 not being uploaded</li>
<li>Fix problem with some UTF-8 characters on OS X</li>
</ul></li>
<li>v1.32 - 2016-07-13
<ul>
<li>Backblaze B2</li>
@ -2837,6 +3135,18 @@
<li>Stefan G. Weichinger office@oops.co.at</li>
<li>Per Cederberg cederberg@gmail.com</li>
<li>Radek Šenfeld rush@logic.cz</li>
</ul>
<h2 id="contact-the-rclone-project">Contact the rclone project</h2>
<p>The project website is at:</p>

MANUAL.md

@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Aug 04, 2016
% Aug 24, 2016
Rclone
======
@ -564,7 +564,7 @@ rclone dedupe [mode] remote:path
### Options
```
--dedupe-mode value Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename.
```
## rclone authorize
@ -583,6 +583,33 @@ rclone config.
rclone authorize
```
## rclone cat
Concatenates any files and sends them to stdout.
### Synopsis
rclone cat sends any files to standard output.
You can use it like this to output a single file
rclone cat remote:path/to/file
Or like this to output any file in dir or subdirectories.
rclone cat remote:path/to/dir
Or like this to output any .txt files in dir or subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
```
rclone cat remote:path
```
## rclone genautocomplete
Output bash completion script for rclone.
@ -627,6 +654,83 @@ rclone.org website.
rclone gendocs output_directory
```
## rclone mount
Mount the remote as a mountpoint. **EXPERIMENTAL**
### Synopsis
rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's
cloud storage systems as a file system with FUSE.
This is **EXPERIMENTAL** - use with care.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
Start the mount like this
rclone mount remote:path/to/files /path/to/local/mount &
Stop the mount with
fusermount -u /path/to/local/mount
Or with OS X
umount /path/to/local/mount
### Limitations ###
This can only read files sequentially, or write files sequentially. It
can't read and write or seek in files.
rclonefs inherits rclone's directory handling. In rclone's world
directories don't really exist. This means that empty directories
will have a tendency to disappear once they fall out of the directory
cache.
The bucket based FSes (eg swift, s3, google cloud storage, b2) won't
work from the root - you will need to specify a bucket, or a path
within the bucket. So `swift:` won't work whereas `swift:bucket` will
as will `swift:bucket/path`.
Only supported on Linux, FreeBSD and OS X at the moment.
### rclone mount vs rclone sync/copy ###
File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
uploads. This might happen in the future, but for the moment rclone
mount won't do that, so will be less reliable than the rclone command.
### Bugs ###
* All the remotes should work for read, but some may not for write
* those which need to know the size in advance won't - eg B2
* maybe should pass in size as -1 to mean work it out
### TODO ###
* Check hashes on upload/download
* Preserve timestamps
* Move directories
```
rclone mount remote:path /path/to/mountpoint
```
### Options
```
--debug-fuse Debug the FUSE internals - needs -v.
--no-modtime Don't read the modification time (can speed things up).
```
Copying single files
--------------------
@ -1912,6 +2016,44 @@ limited to transferring about 2 files per second only. Individual
files may be transferred much faster at 100s of MBytes/s but lots of
small files can take a long time.
### Making your own client_id ###
When you use rclone with Google drive in its default configuration you
are using rclone's client_id. This is shared between all the rclone
users. There is a global rate limit on the number of queries per
second that each client_id can do set by Google. rclone already has a
high quota and I will continue to make sure it is high enough by
contacting Google.
However you might find you get better performance making your own
client_id if you are a heavy user. Or you may not depending on exactly
how Google have been raising rclone's rate limit.
Here is how to create your own Google Drive client ID for rclone:
1. Log into the [Google API
Console](https://console.developers.google.com/) with your Google
account. It doesn't matter what Google account you use. (It need not
be the same account as the Google Drive you want to access)
2. Select a project or create a new project.
3. Under Overview, Google APIs, Google Apps APIs, click "Drive API",
then "Enable".
4. Click "Credentials" in the left-side panel (not "Go to
credentials", which opens the wizard), then "Create credentials", then
"OAuth client ID". It will prompt you to set the OAuth consent screen
product name, if you haven't set one already.
5. Choose an application type of "other", and click "Create". (the
default name is fine)
6. It will show you a client ID and client secret. Use these values
in rclone config to add a new remote or edit an existing remote.
(Thanks to @balazer on github for these instructions.)
Amazon S3
---------------------------------------
@ -2029,6 +2171,25 @@ Choose a number from below, or type in your own value
9 / South America (Sao Paulo) Region.
\ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
3 | Granting this on a bucket is generally not recommended.
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
\ "authenticated-read"
/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-read"
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
acl> private
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
@ -2817,6 +2978,14 @@ To download files above this threshold, rclone requests a `tempLink`
which downloads the file through a temporary URL directly from the
underlying S3 storage.
#### --acd-upload-wait-time=TIME ####
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
controls the time rclone waits - 2 minutes by default. You might want
to increase the time if you are having problems with very big files.
Upload with the `-v` flag for more info.
### Limitations ###
Note that Amazon Drive is case insensitive so you can't have a
@ -3458,6 +3627,307 @@ metadata called `rclone_modified` in RFC3339 with nanoseconds format.
MD5 checksums are natively supported by Yandex Disk.
Crypt
----------------------------------------
The `crypt` remote encrypts and decrypts another remote.
To use it first set up the underlying remote following the config
instructions for that remote. You can also use a local pathname
instead of a remote which will encrypt and decrypt from that directory
which might be useful for encrypting onto a USB stick for example.
First check your chosen remote is working - we'll call it
`remote:path` in these docs. Note that anything inside `remote:path`
will be encrypted and anything outside won't. This means that if you
are using a bucket based remote (eg S3, B2, swift) then you should
probably put the bucket in the remote `s3:bucket`. If you just use
`s3:` then rclone will make encrypted bucket names too (if using file
name encryption) which may or may not be what you want.
Now configure `crypt` using `rclone config`. We will call this one
`secret` to differentiate it from the `remote`.
```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> secret
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
5 / Encrypt/Decrypt a remote
\ "crypt"
6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
7 / Google Drive
\ "drive"
8 / Hubic
\ "hubic"
9 / Local Disk
\ "local"
10 / Microsoft OneDrive
\ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
12 / Yandex Disk
\ "yandex"
Storage> 5
Remote to encrypt/decrypt.
remote> remote:path
How to encrypt the filenames.
Choose a number from below, or type in your own value
1 / Don't encrypt the file names. Adds a ".bin" extension only.
\ "off"
2 / Encrypt the filenames see the docs for the details.
\ "standard"
filename_encryption> 2
Password or pass phrase for encryption.
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Password or pass phrase for salt. Optional but recommended.
Should be different to the previous password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> g
Password strength in bits.
64 is just about memorable
128 is secure
1024 is the maximum
Bits> 128
Your password is: JAsJvRcgR-_veXNfy_sGmQ
Use this password?
y) Yes
n) No
y/n> y
Remote config
--------------------
[secret]
remote = remote:path
filename_encryption = standard
password = CfDxopZIXFG0Oo-ac7dPLWWOHkNJbw
password2 = HYUpfuzHJL8qnX9fOaIYijq0xnVLwyVzp3y4SF3TwYqAU6HLysk
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
**Important** The password stored in the config file is lightly
obscured so it isn't immediately obvious what it is. It is in no way
secure unless you use config file encryption.
A long passphrase is recommended, or you can use a random one. Note
that if you reconfigure rclone with the same passwords/passphrases
elsewhere it will be compatible - all the secrets used are derived
from those two passwords/passphrases.
Note that rclone does not encrypt

* file length - this can be calculated to within 16 bytes
* modification time - used for syncing
## Example ##
To test I made a little directory of files using "standard" file name
encryption.
```
plaintext/
├── file0.txt
├── file1.txt
└── subdir
├── file2.txt
├── file3.txt
└── subsubdir
└── file4.txt
```
Copy these to the remote and list them back
```
$ rclone -q copy plaintext secret:
$ rclone -q ls secret:
7 file1.txt
6 file0.txt
8 subdir/file2.txt
10 subdir/subsubdir/file4.txt
9 subdir/file3.txt
```
Now see what that looked like when encrypted
```
$ rclone -q ls remote:path
55 hagjclgavj2mbiqm6u6cnjjqcg
54 v05749mltvv1tf4onltun46gls
57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
```
Note that this retains the directory structure which means you can do this
```
$ rclone -q ls secret:subdir
8 file2.txt
9 file3.txt
10 subsubdir/file4.txt
```
If you don't use file name encryption then the remote will look like this
- note the `.bin` extensions added to prevent the cloud provider
attempting to interpret the data.
```
$ rclone -q ls remote:path
54 file0.txt.bin
57 subdir/file3.txt.bin
56 subdir/file2.txt.bin
58 subdir/subsubdir/file4.txt.bin
55 file1.txt.bin
```
### File name encryption modes ###
Here are some of the features of the file name encryption modes
Off
* doesn't hide file names or directory structure
* allows for longer file names (~246 characters)
* can use sub paths and copy single files
Standard
* file names encrypted
* file names can't be as long (~156 characters)
* can use sub paths and copy single files
* directory structure visible
* identical file names will have identical uploaded names
* can use shortcuts to shorten the directory recursion
Cloud storage systems have various limits on file name length and
total path length which you are more likely to hit using "Standard"
file name encryption. If you keep your file names to below 156
characters in length then you should be OK on all providers.
There may be an even more secure file name encryption mode in the
future which will address the long file name problem.
## File formats ##
### File encryption ###
Files are encrypted 1:1 source file to destination object. The file
has a header and is divided into chunks.
#### Header ####
* 8 bytes magic string `RCLONE\x00\x00`
* 24 bytes Nonce (IV)
The initial nonce is generated from the operating system's crypto
strong random number generator. The nonce is incremented for each
chunk read making sure each nonce is unique for each block written.
The chance of a nonce being re-used is minuscule. If you wrote an
exabyte of data (10¹⁸ bytes) you would have a probability of
approximately 2×10⁻³² of re-using a nonce.
#### Chunk ####
Each chunk will contain 64kB of data, except for the last one which
may have less data. The data chunk is in standard NACL secretbox
format. Secretbox uses XSalsa20 and Poly1305 to encrypt and
authenticate messages.
Each chunk contains:
* 16 Bytes of Poly1305 authenticator
* 1 - 65536 bytes XSalsa20 encrypted data
64k chunk size was chosen as the best performing chunk size (the
authenticator takes too much time below this and the performance drops
off due to cache effects above this). Note that these chunks are
buffered in memory so they can't be too big.
This uses a 32 byte (256 bit key) key derived from the user password.
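As a rough illustration of where the 16 byte per-chunk overhead comes from, here is a sketch sealing one full 64kB chunk with Go's `golang.org/x/crypto/nacl/secretbox` - not rclone's actual code, and with the key and nonce handling simplified:

```
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/secretbox"
)

func main() {
	var key [32]byte   // derived from the user password in rclone
	var nonce [24]byte // random in the header, then incremented per chunk
	rand.Read(key[:])
	rand.Read(nonce[:])

	data := make([]byte, 64*1024) // one full chunk of plaintext

	// Seal prepends the 16 byte Poly1305 authenticator to the
	// XSalsa20 encrypted data, so each chunk grows by 16 bytes.
	sealed := secretbox.Seal(nil, data, &nonce, &key)
	fmt.Println(len(sealed)) // 65552 = 65536 + 16
}
```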
#### Examples ####
1 byte file will encrypt to
* 32 bytes header
* 17 bytes data chunk
49 bytes total
1MB (1048576 bytes) file will encrypt to
* 32 bytes header
* 16 chunks of 65568 bytes
1049120 bytes total (a 0.05% overhead). This is the overhead for big
files.
### Name encryption ###
File names are encrypted segment by segment - the path is broken up
into `/` separated strings and these are encrypted individually.
File segments are padded using PKCS#7 to a multiple of 16 bytes
before encryption.
They are then encrypted with EME using AES with 256 bit key. EME
(ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003
paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
This makes for deterministic encryption which is what we want - the
same filename must encrypt to the same thing otherwise we can't find
it on the cloud storage system.
This means that
* filenames with the same name will encrypt the same
* filenames which start the same won't have a common prefix
This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of
which are derived from the user password.
After encryption they are written out using a modified version of
standard `base32` encoding as described in RFC4648. The standard
encoding is modified in two ways:
* it becomes lower case (no-one likes upper case filenames!)
* we strip the padding character `=`
`base32` is used rather than the more efficient `base64` so rclone can be
used on case insensitive remotes (eg Windows, Amazon Drive).
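A minimal sketch of that encoding step, assuming Go's standard `encoding/base32` (the helper name is hypothetical; this is not rclone's internal code):

```
package main

import (
	"encoding/base32"
	"fmt"
	"strings"
)

// encodeSegment shows the modified base32 described above: standard
// RFC4648 alphabet, lower cased, with the "=" padding stripped.
func encodeSegment(ciphertext []byte) string {
	s := base32.StdEncoding.EncodeToString(ciphertext)
	return strings.ToLower(strings.TrimRight(s, "="))
}

func main() {
	fmt.Println(encodeSegment([]byte("an encrypted name segment")))
}
```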
### Key derivation ###
Rclone uses `scrypt` with parameters `N=16384, r=8, p=1` with an
optional user supplied salt (password2) to derive the 32+32+16 = 80
bytes of key material required. If the user doesn't supply a salt
then rclone uses an internal one.
`scrypt` makes it impractical to mount a dictionary attack on rclone
encrypted data. For full protection against this you should always use
a salt.
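A sketch of the derivation under those parameters, using `golang.org/x/crypto/scrypt` - the salt shown stands in for password2 (rclone's internal fallback salt is not reproduced here), and the split of the 80 bytes follows the sizes given above:

```
package main

import (
	"fmt"

	"golang.org/x/crypto/scrypt"
)

func main() {
	password := []byte("your pass phrase")
	salt := []byte("password2 goes here") // rclone uses an internal salt if none is supplied

	// N=16384, r=8, p=1 and 80 bytes of output, as described above.
	km, err := scrypt.Key(password, salt, 16384, 8, 1, 80)
	if err != nil {
		panic(err)
	}

	// 32+32+16 = 80 bytes of key material: two 256 bit keys and a 128 bit IV.
	key1, key2, iv := km[:32], km[32:64], km[64:80]
	fmt.Printf("%x\n%x\n%x\n", key1, key2, iv)
}
```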
Local Filesystem
-------------------------------------------
@ -3532,6 +4002,36 @@ file exceeds 258 characters on z, so only use this option if you have to.
Changelog
---------
* v1.33 - 2016-08-24
* New Features
* Implement encryption
* data encrypted in NACL secretbox format
* with optional file name encryption
* New commands
* rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL)
* works on Linux, FreeBSD and OS X (need testers for the last 2!)
* rclone cat - outputs remote file or files to the terminal
* rclone genautocomplete - command to make a bash completion script for rclone
* Editing a remote using `rclone config` now goes through the wizard
* Compile with go 1.7 - this fixes rclone on macOS Sierra and on 386 processors
* Use cobra for sub commands and docs generation
* drive
* Document how to make your own client_id
* s3
* User-configurable Amazon S3 ACL (thanks Radek Šenfeld)
* b2
* Fix stats accounting for upload - no more jumping to 100% done
* On cleanup delete hide marker if it is the current file
* New B2 API endpoint (thanks Per Cederberg)
* Set maximum backoff to 5 Minutes
* onedrive
* Fix URL escaping in file names - eg uploading files with `+` in them.
* amazon cloud drive
* Fix token expiry during large uploads
* Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
* local
* Fix filenames with invalid UTF-8 not being uploaded
* Fix problem with some UTF-8 characters on OS X
* v1.32 - 2016-07-13
* Backblaze B2
* Fix upload of files large files not in root
@ -4154,6 +4654,8 @@ Contributors
* Justin R. Wilson <jrw972@gmail.com>
* Antonio Messina <antonio.s.messina@gmail.com>
* Stefan G. Weichinger <office@oops.co.at>
* Per Cederberg <cederberg@gmail.com>
* Radek Šenfeld <rush@logic.cz>
Contact the rclone project
--------------------------


@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Aug 04, 2016
Aug 24, 2016
@ -511,7 +511,7 @@ Or
Options
--dedupe-mode value Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename.
rclone authorize
@ -526,6 +526,29 @@ a machine with a browser - use as instructed by rclone config.
rclone authorize
rclone cat
Concatenates any files and sends them to stdout.
Synopsis
rclone cat sends any files to standard output.
You can use it like this to output a single file
rclone cat remote:path/to/file
Or like this to output any file in dir or subdirectories.
rclone cat remote:path/to/dir
Or like this to output any .txt files in dir or subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
rclone cat remote:path
rclone genautocomplete
Output bash completion script for rclone.
@ -562,6 +585,77 @@ rclone.org website.
rclone gendocs output_directory
rclone mount
Mount the remote as a mountpoint. EXPERIMENTAL
Synopsis
rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's
cloud storage systems as a file system with FUSE.
This is EXPERIMENTAL - use with care.
First set up your remote using rclone config. Check it works with
rclone ls etc.
Start the mount like this
rclone mount remote:path/to/files /path/to/local/mount &
Stop the mount with
fusermount -u /path/to/local/mount
Or with OS X
umount /path/to/local/mount
Limitations
This can only read files sequentially, or write files sequentially. It
can't read and write or seek in files.
rclonefs inherits rclone's directory handling. In rclone's world
directories don't really exist. This means that empty directories will
have a tendency to disappear once they fall out of the directory cache.
The bucket based FSes (eg swift, s3, google cloud storage, b2) won't
work from the root - you will need to specify a bucket, or a path within
the bucket. So swift: won't work whereas swift:bucket will as will
swift:bucket/path.
Only supported on Linux, FreeBSD and OS X at the moment.
rclone mount vs rclone sync/copy
File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy commands
cope with this with lots of retries. However rclone mount can't use
retries in the same way without making local copies of the uploads. This
might happen in the future, but for the moment rclone mount won't do
that, so will be less reliable than the rclone command.
Bugs
- All the remotes should work for read, but some may not for write
- those which need to know the size in advance won't - eg B2
- maybe should pass in size as -1 to mean work it out
TODO
- Check hashes on upload/download
- Preserve timestamps
- Move directories
rclone mount remote:path /path/to/mountpoint
Options
--debug-fuse Debug the FUSE internals - needs -v.
--no-modtime Don't read the modification time (can speed things up).
Copying single files
rclone normally syncs or copies directories. However if the source
@ -1894,6 +1988,43 @@ to transferring about 2 files per second only. Individual files may be
transferred much faster at 100s of MBytes/s but lots of small files can
take a long time.
Making your own client_id
When you use rclone with Google drive in its default configuration you
are using rclone's client_id. This is shared between all the rclone
users. There is a global rate limit on the number of queries per second
that each client_id can do set by Google. rclone already has a high
quota and I will continue to make sure it is high enough by contacting
Google.
However you might find you get better performance making your own
client_id if you are a heavy user. Or you may not depending on exactly
how Google have been raising rclone's rate limit.
Here is how to create your own Google Drive client ID for rclone:
1. Log into the Google API Console with your Google account. It doesn't
matter what Google account you use. (It need not be the same account
as the Google Drive you want to access)
2. Select a project or create a new project.
3. Under Overview, Google APIs, Google Apps APIs, click "Drive API",
then "Enable".
4. Click "Credentials" in the left-side panel (not "Go to credentials",
which opens the wizard), then "Create credentials", then "OAuth
client ID". It will prompt you to set the OAuth consent screen
product name, if you haven't set one already.
5. Choose an application type of "other", and click "Create". (the
default name is fine)
6. It will show you a client ID and client secret. Use these values in
rclone config to add a new remote or edit an existing remote.
(Thanks to @balazer on github for these instructions.)
Amazon S3
@ -2010,6 +2141,25 @@ This will guide you through an interactive setup process.
9 / South America (Sao Paulo) Region.
\ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
3 | Granting this on a bucket is generally not recommended.
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
\ "authenticated-read"
/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-read"
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
acl> private
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
@ -2773,6 +2923,14 @@ To download files above this threshold, rclone requests a tempLink which
downloads the file through a temporary URL directly from the underlying
S3 storage.
--acd-upload-wait-time=TIME
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This controls
the time rclone waits - 2 minutes by default. You might want to increase
the time if you are having problems with very big files. Upload with the
-v flag for more info.
Limitations
Note that Amazon Drive is case insensitive so you can't have a file
@ -3390,6 +3548,292 @@ MD5 checksums
MD5 checksums are natively supported by Yandex Disk.
Crypt
The crypt remote encrypts and decrypts another remote.
To use it first set up the underlying remote following the config
instructions for that remote. You can also use a local pathname instead
of a remote which will encrypt and decrypt from that directory which
might be useful for encrypting onto a USB stick for example.
First check your chosen remote is working - we'll call it remote:path in
these docs. Note that anything inside remote:path will be encrypted and
anything outside won't. This means that if you are using a bucket based
remote (eg S3, B2, swift) then you should probably put the bucket in the
remote s3:bucket. If you just use s3: then rclone will make encrypted
bucket names too (if using file name encryption) which may or may not be
what you want.
Now configure crypt using rclone config. We will call this one secret to
differentiate it from the remote.
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> secret
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
5 / Encrypt/Decrypt a remote
\ "crypt"
6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
7 / Google Drive
\ "drive"
8 / Hubic
\ "hubic"
9 / Local Disk
\ "local"
10 / Microsoft OneDrive
\ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
12 / Yandex Disk
\ "yandex"
Storage> 5
Remote to encrypt/decrypt.
remote> remote:path
How to encrypt the filenames.
Choose a number from below, or type in your own value
1 / Don't encrypt the file names. Adds a ".bin" extension only.
\ "off"
2 / Encrypt the filenames see the docs for the details.
\ "standard"
filename_encryption> 2
Password or pass phrase for encryption.
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Password or pass phrase for salt. Optional but recommended.
Should be different to the previous password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> g
Password strength in bits.
64 is just about memorable
128 is secure
1024 is the maximum
Bits> 128
Your password is: JAsJvRcgR-_veXNfy_sGmQ
Use this password?
y) Yes
n) No
y/n> y
Remote config
--------------------
[secret]
remote = remote:path
filename_encryption = standard
password = CfDxopZIXFG0Oo-ac7dPLWWOHkNJbw
password2 = HYUpfuzHJL8qnX9fOaIYijq0xnVLwyVzp3y4SF3TwYqAU6HLysk
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
IMPORTANT The password stored in the config file is lightly obscured
so it isn't immediately obvious what it is. It is in no way secure
unless you use config file encryption.
A long passphrase is recommended, or you can use a random one. Note that
if you reconfigure rclone with the same passwords/passphrases elsewhere
it will be compatible - all the secrets used are derived from those two
passwords/passphrases.
Note that rclone does not encrypt
- file length - this can be calculated within 16 bytes
- modification time - used for syncing
Example
To test I made a little directory of files using "standard" file name
encryption.
plaintext/
├── file0.txt
├── file1.txt
└── subdir
├── file2.txt
├── file3.txt
└── subsubdir
└── file4.txt
Copy these to the remote and list them back
$ rclone -q copy plaintext secret:
$ rclone -q ls secret:
7 file1.txt
6 file0.txt
8 subdir/file2.txt
10 subdir/subsubdir/file4.txt
9 subdir/file3.txt
Now see what that looked like when encrypted
$ rclone -q ls remote:path
55 hagjclgavj2mbiqm6u6cnjjqcg
54 v05749mltvv1tf4onltun46gls
57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
Note that this retains the directory structure which means you can do
this
$ rclone -q ls secret:subdir
8 file2.txt
9 file3.txt
10 subsubdir/file4.txt
If you don't use file name encryption then the remote will look like
this - note the .bin extensions added to prevent the cloud provider
attempting to interpret the data.
$ rclone -q ls remote:path
54 file0.txt.bin
57 subdir/file3.txt.bin
56 subdir/file2.txt.bin
58 subdir/subsubdir/file4.txt.bin
55 file1.txt.bin
File name encryption modes
Here are some of the features of the file name encryption modes
Off
- doesn't hide file names or directory structure
- allows for longer file names (~246 characters)
- can use sub paths and copy single files
Standard
- file names encrypted
- file names can't be as long (~156 characters)
- can use sub paths and copy single files
- directory structure visible
- identical file names will have identical uploaded names
- can use shortcuts to shorten the directory recursion
Cloud storage systems have various limits on file name length and total
path length which you are more likely to hit using "Standard" file name
encryption. If you keep your file names to below 156 characters in
length then you should be OK on all providers.
There may be an even more secure file name encryption mode in the future
which will address the long file name problem.
File formats
File encryption
Files are encrypted 1:1 source file to destination object. The file has
a header and is divided into chunks.
Header
- 8 bytes magic string RCLONE\x00\x00
- 24 bytes Nonce (IV)
The initial nonce is generated from the operating system's
cryptographically strong random number generator. The nonce is
incremented for each chunk read, making sure each nonce is unique for
each block written. The chance of a nonce being re-used is minuscule.
If you wrote an exabyte of data (10¹⁸ bytes) you would have a
probability of approximately 2×10⁻³² of re-using a nonce.
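A minimal sketch of that scheme in Go - assuming a simple byte-wise
carry for the increment; rclone's actual crypt code may differ in
detail:

package main

import (
	"crypto/rand"
	"fmt"
)

// newNonce fills the 24 byte nonce from the OS's
// cryptographically strong random number generator.
func newNonce() (n [24]byte, err error) {
	_, err = rand.Read(n[:])
	return
}

// increment treats the nonce as a counter, carrying between
// bytes, so every chunk gets a nonce no other chunk has used.
func increment(n *[24]byte) {
	for i := range n {
		n[i]++
		if n[i] != 0 {
			return // no carry into the next byte
		}
	}
}

func main() {
	n, err := newNonce()
	if err != nil {
		panic(err)
	}
	increment(&n) // nonce for the next chunk
	fmt.Printf("% x\n", n)
}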
Chunk
Each chunk will contain 64kB of data, except for the last one which may
have less data. The data chunk is in standard NACL secretbox format.
Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate
messages.
Each chunk contains:
- 16 Bytes of Poly1305 authenticator
- 1 - 65536 bytes XSalsa20 encrypted data
The 64k chunk size was chosen as the best performing chunk size: below
this the authenticator takes proportionally too much time, and above it
performance drops off due to cache effects. Note that these chunks are
buffered in memory so they can't be too big.
This uses a 32 byte (256 bit) key derived from the user password.
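As a sketch, sealing a single chunk with the golang.org/x/crypto
secretbox package looks like this (key and nonce handling simplified;
see Header above and Key derivation below for how they are really
produced):

package main

import (
	"fmt"

	"golang.org/x/crypto/nacl/secretbox"
)

func main() {
	var key [32]byte   // 256 bit key derived from the user password
	var nonce [24]byte // per-chunk nonce (see Header above)
	data := make([]byte, 64*1024)

	// Seal prepends the 16 byte Poly1305 authenticator to the
	// XSalsa20 encrypted data, so the chunk grows by exactly
	// secretbox.Overhead (16) bytes.
	sealed := secretbox.Seal(nil, data, &nonce, &key)
	fmt.Println(len(sealed)) // 65552
}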
Examples
1 byte file will encrypt to
- 32 bytes header
- 17 bytes data chunk
49 bytes total
1MB (1048576 bytes) file will encrypt to
- 32 bytes header
- 16 chunks of 65552 bytes (65536 bytes of data plus the 16 byte
  authenticator each)
1048864 bytes total (a 0.03% overhead). This is the overhead for big
files.
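A sketch of the size calculation implied by the format above (this
helper is illustrative, not rclone's own API):

package main

import "fmt"

// encryptedSize computes the ciphertext size implied by the
// format above: 32 byte header plus 16 bytes per chunk.
func encryptedSize(size int64) int64 {
	const (
		headerSize    = 32        // magic string + nonce
		blockData     = 64 * 1024 // plaintext bytes per chunk
		blockOverhead = 16        // Poly1305 authenticator
	)
	blocks, residue := size/blockData, size%blockData
	encrypted := int64(headerSize) + blocks*(blockData+blockOverhead)
	if residue != 0 {
		encrypted += residue + blockOverhead
	}
	return encrypted
}

func main() {
	fmt.Println(encryptedSize(1))       // 49
	fmt.Println(encryptedSize(1048576)) // 1048864
}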
Name encryption
File names are encrypted segment by segment - the path is broken up into
/ separated strings and these are encrypted individually.
File segments are padded using PKCS#7 to a multiple of 16 bytes before
encryption.
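A sketch of the standard PKCS#7 padding step (illustrative, not
rclone's exact source):

package main

import (
	"bytes"
	"fmt"
)

// pkcs7Pad appends n copies of the byte n so the result is a
// multiple of blockSize; already-aligned input gets a whole
// extra block, which keeps the padding unambiguous to strip.
func pkcs7Pad(data []byte, blockSize int) []byte {
	n := blockSize - len(data)%blockSize
	return append(data, bytes.Repeat([]byte{byte(n)}, n)...)
}

func main() {
	fmt.Printf("% x\n", pkcs7Pad([]byte("file.txt"), 16))
}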
They are then encrypted with EME using AES with 256 bit key. EME
(ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003
paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
This makes for deterministic encryption, which is what we want - the
same filename must encrypt to the same thing otherwise we can't find it
on the cloud storage system.
This means that
- filenames with the same name will encrypt the same
- filenames which start the same won't have a common prefix
This uses a 32 byte (256 bit) key and a 16 byte (128 bit) IV, both of
which are derived from the user password.
After encryption they are written out using a modified version of
standard base32 encoding as described in RFC4648. The standard encoding
is modified in two ways:
- it becomes lower case (no-one likes upper case filenames!)
- we strip the padding character =
base32 is used rather than the more efficient base64 so rclone can be
used on case insensitive remotes (eg Windows, Amazon Drive).
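A sketch of that modified encoding using Go's standard library (the
alphabet mirrors RFC4648 in lower case; rclone's own encoder may differ
in detail):

package main

import (
	"encoding/base32"
	"fmt"
)

func main() {
	// RFC4648 base32 with a lower case alphabet and the "="
	// padding stripped.
	enc := base32.NewEncoding("abcdefghijklmnopqrstuvwxyz234567").
		WithPadding(base32.NoPadding)
	fmt.Println(enc.EncodeToString([]byte("encrypted name bytes")))
}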
Key derivation
Rclone uses scrypt with parameters N=16384, r=8, p=1 and an optional
user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key
material required. If the user doesn't supply a salt then rclone uses
an internal one.
scrypt makes it impractical to mount a dictionary attack on rclone
encrypted data. For full protection against this you should always use
a salt.
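A sketch of the derivation with golang.org/x/crypto/scrypt - the salt
below is a made-up stand-in, and the ordering of the three outputs is
an assumption for illustration:

package main

import (
	"fmt"

	"golang.org/x/crypto/scrypt"
)

func main() {
	password := []byte("your passphrase")
	salt := []byte("password2 goes here") // stand-in; rclone has an internal default

	// N=16384, r=8, p=1, 80 bytes of key material out.
	km, err := scrypt.Key(password, salt, 16384, 8, 1, 32+32+16)
	if err != nil {
		panic(err)
	}
	dataKey := km[:32]   // secretbox key for file contents
	nameKey := km[32:64] // AES-256 key for file names
	nameIV := km[64:]    // 16 byte IV for name encryption
	fmt.Println(len(dataKey), len(nameKey), len(nameIV))
}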
Local Filesystem
Local paths are specified as normal filesystem paths, eg
@ -3459,6 +3903,41 @@ characters on z, so only use this option if you have to.
Changelog
- v1.33 - 2016-08-24
- New Features
- Implement encryption
- data encrypted in NACL secretbox format
- with optional file name encryption
- New commands
- rclone mount - implements FUSE mounting of
remotes (EXPERIMENTAL)
- works on Linux, FreeBSD and OS X (need testers for the
last 2!)
- rclone cat - outputs remote file or files to the terminal
- rclone genautocomplete - command to make a bash completion
script for rclone
- Editing a remote using rclone config now goes through the wizard
- Compile with go 1.7 - this fixes rclone on macOS Sierra and on
386 processors
- Use cobra for sub commands and docs generation
- drive
- Document how to make your own client_id
- s3
- User-configurable Amazon S3 ACL (thanks Radek Šenfeld)
- b2
- Fix stats accounting for upload - no more jumping to 100% done
- On cleanup delete hide marker if it is the current file
- New B2 API endpoint (thanks Per Cederberg)
- Set maximum backoff to 5 Minutes
- onedrive
- Fix URL escaping in file names - eg uploading files with +
in them.
- amazon cloud drive
- Fix token expiry during large uploads
- Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
- local
- Fix filenames with invalid UTF-8 not being uploaded
- Fix problem with some UTF-8 characters on OS X
- v1.32 - 2016-07-13
- Backblaze B2
- Fix upload of large files not in root
@ -4126,6 +4605,8 @@ Contributors
- Justin R. Wilson jrw972@gmail.com
- Antonio Messina antonio.s.messina@gmail.com
- Stefan G. Weichinger office@oops.co.at
- Per Cederberg cederberg@gmail.com
- Radek Šenfeld rush@logic.cz
Contact the rclone project
@ -1,12 +1,42 @@
---
title: "Documentation"
description: "Rclone Changelog"
date: "2016-07-13"
date: "2016-08-24"
---
Changelog
---------
* v1.33 - 2016-08-24
* New Features
* Implement encryption
* data encrypted in NACL secretbox format
* with optional file name encryption
* New commands
* rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL)
* works on Linux, FreeBSD and OS X (need testers for the last 2!)
* rclone cat - outputs remote file or files to the terminal
* rclone genautocomplete - command to make a bash completion script for rclone
* Editing a remote using `rclone config` now goes through the wizard
* Compile with go 1.7 - this fixes rclone on macOS Sierra and on 386 processors
* Use cobra for sub commands and docs generation
* drive
* Document how to make your own client_id
* s3
* User-configurable Amazon S3 ACL (thanks Radek Šenfeld)
* b2
* Fix stats accounting for upload - no more jumping to 100% done
* On cleanup delete hide marker if it is the current file
* New B2 API endpoint (thanks Per Cederberg)
* Set maximum backoff to 5 Minutes
* onedrive
* Fix URL escaping in file names - eg uploading files with `+` in them.
* amazon cloud drive
* Fix token expiry during large uploads
* Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
* local
* Fix filenames with invalid UTF-8 not being uploaded
* Fix problem with some UTF-8 characters on OS X
* v1.32 - 2016-07-13
* Backblaze B2
* Fix upload of large files not in root
@ -1,12 +1,12 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone
Sync files and directories to and from local and remote object stores - v1.32
Sync files and directories to and from local and remote object stores - v1.33-DEV
### Synopsis
@ -50,72 +50,74 @@ rclone
### Options
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
-V, --version Print the version number
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
-V, --version Print the version number
```
### SEE ALSO
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
@ -129,6 +131,7 @@ rclone
* [rclone lsl](/commands/rclone_lsl/) - List all the objects path with modification time, size and path.
* [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path.
* [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist.
* [rclone mount](/commands/rclone_mount/) - Mount the remote as a mountpoint. **EXPERIMENTAL**
* [rclone move](/commands/rclone_move/) - Move files from source to dest.
* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty.
@ -137,4 +140,4 @@ rclone
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone version](/commands/rclone_version/) - Show the version number.
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@ -23,70 +23,71 @@ rclone authorize
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@ -26,70 +26,71 @@ rclone check source:path dest:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@ -23,70 +23,71 @@ rclone cleanup remote:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@ -20,70 +20,71 @@ rclone config
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@ -59,70 +59,71 @@ rclone copy source:path dest:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
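For example, these inherited flags combine freely on a copy; a minimal sketch (remote and destination names here are placeholders) that caps bandwidth, skips small files and raises parallelism:
```
rclone copy --bwlimit 1M --min-size 10k --transfers 8 remote:src /tmp/dest
```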
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
@ -95,76 +95,77 @@ rclone dedupe [mode] remote:path
### Options
```
--dedupe-mode value Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename.
```
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
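For example, the `--dedupe-mode` flag from the Options section above picks the resolution strategy non-interactively; a minimal sketch with a placeholder remote:
```
rclone dedupe --dedupe-mode newest remote:path
```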
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
@ -37,70 +37,71 @@ rclone delete remote:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
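Since delete honours the filter flags above, a cautious pattern is to preview with `--dry-run` before deleting for real; a sketch with a placeholder remote:
```
rclone --min-size 100M --dry-run delete remote:path
rclone --min-size 100M delete remote:path
```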
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@ -35,70 +35,71 @@ rclone genautocomplete [output_file]
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
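If writing to /etc/bash_completion.d is not possible, the optional output_file argument redirects the script somewhere writable; a sketch with a placeholder path, sourcing the result into the current shell:
```
rclone genautocomplete /tmp/rclone_completion
. /tmp/rclone_completion
```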
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
@ -23,70 +23,71 @@ rclone gendocs output_directory
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
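For example, to render the command pages into a scratch directory (a placeholder path) before moving them into the website tree:
```
rclone gendocs /tmp/rclone-docs
```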
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
@ -20,70 +20,71 @@ rclone ls remote:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
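The filtering and depth flags above apply to listings as well; a sketch (placeholder remote) that lists only top-level .txt files:
```
rclone --max-depth 1 --include "*.txt" ls remote:path
```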
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@ -20,70 +20,71 @@ rclone lsd remote:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
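A quick sketch of how these inherited flags combine with the command (the remote name and path are placeholders for a remote you have already configured):
```
rclone --checkers 16 --max-depth 1 lsd remote:path
```
Any of the global flags listed above can be placed before the subcommand in the same way.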
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@ -20,70 +20,71 @@ rclone lsl remote:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
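For example (hypothetical remote and path), the filtering flags above restrict what gets listed:
```
rclone --min-size 10M --max-age 30d lsl remote:path
```
Per the --min-size and --max-age descriptions above, this should list only objects of at least 10 MBytes that were modified within the last 30 days.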
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@ -23,70 +23,71 @@ rclone md5sum remote:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
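As an illustration (remote, path and pattern are placeholders), the include filter above can limit which files get checksummed:
```
rclone --include "*.iso" md5sum remote:path
```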
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@ -20,70 +20,71 @@ rclone mkdir remote:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
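A minimal sketch (hypothetical remote and directory name); -v from the flag list above makes the command report what it did:
```
rclone -v mkdir remote:newdir
```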
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@ -36,70 +36,71 @@ rclone move source:path dest:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
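Since move deletes files from the source, a cautious pattern (paths are placeholders) is to rehearse with --dry-run, listed above, before the real run:
```
rclone --dry-run move source:path dest:path
rclone move source:path dest:path
```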
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
@ -24,70 +24,71 @@ rclone purge remote:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
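Because purge removes the path and everything under it, a trial run with -n (--dry-run, listed above) before the real deletion is a reasonable habit (remote and path are placeholders):
```
rclone -n purge remote:path
rclone purge remote:path
```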
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016
@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
@ -22,70 +22,71 @@ rclone rmdir remote:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after                    When synchronizing, delete files on destination after transferring
--delete-before                   When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int            Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size                     Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016

View File

@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone sha1sum"
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/
@ -23,70 +23,71 @@ rclone sha1sum remote:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after                    When synchronizing, delete files on destination after transferring
--delete-before                   When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value          Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size                     Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after                    When synchronizing, delete files on destination after transferring
--delete-before                   When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int            Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size                     Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016

View File

@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
@ -20,70 +20,71 @@ rclone size remote:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after                    When synchronizing, delete files on destination after transferring
--delete-before                   When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value          Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size                     Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after                    When synchronizing, delete files on destination after transferring
--delete-before                   When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int            Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size                     Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016

View File

@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone sync"
slug: rclone_sync
url: /commands/rclone_sync/
@ -39,70 +39,71 @@ rclone sync source:path dest:path
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after                    When synchronizing, delete files on destination after transferring
--delete-before                   When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value          Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size                     Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after                    When synchronizing, delete files on destination after transferring
--delete-before                   When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int            Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size                     Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016

View File

@ -1,5 +1,5 @@
---
date: 2016-08-04T21:37:09+01:00
date: 2016-08-24T23:01:36+01:00
title: "rclone version"
slug: rclone_version
url: /commands/rclone_version/
@ -20,70 +20,71 @@ rclone version
### Options inherited from parent commands
```
--acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after                    When synchronizing, delete files on destination after transferring
--delete-before                   When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size value          Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size                     Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after                    When synchronizing, delete files on destination after transferring
--delete-before                   When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int            Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size                     Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 4-Aug-2016
###### Auto generated by spf13/cobra on 24-Aug-2016

View File

@ -2,40 +2,40 @@
title: "Rclone downloads"
description: "Download rclone binaries for your OS."
type: page
date: "2016-07-13"
date: "2016-08-24"
---
Rclone Download v1.32
Rclone Download v1.33
=====================
* Windows
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.32-windows-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.32-windows-amd64.zip)
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.33-windows-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.33-windows-amd64.zip)
* OSX
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.32-osx-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.32-osx-amd64.zip)
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.33-osx-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.33-osx-amd64.zip)
* Linux
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.32-linux-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.32-linux-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.32-linux-arm.zip)
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.33-linux-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.33-linux-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.33-linux-arm.zip)
* FreeBSD
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.32-freebsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.32-freebsd-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.32-freebsd-arm.zip)
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.33-freebsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.33-freebsd-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.33-freebsd-arm.zip)
* NetBSD
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.32-netbsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.32-netbsd-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.32-netbsd-arm.zip)
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.33-netbsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.33-netbsd-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.33-netbsd-arm.zip)
* OpenBSD
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.32-openbsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.32-openbsd-amd64.zip)
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.33-openbsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.33-openbsd-amd64.zip)
* Plan 9
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.32-plan9-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.32-plan9-amd64.zip)
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.33-plan9-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.33-plan9-amd64.zip)
* Solaris
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.32-solaris-amd64.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.33-solaris-amd64.zip)
You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.32).
You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.33).
Downloads for scripting
=======================

View File

@ -1,4 +1,4 @@
package fs
// Version of rclone
var Version = "v1.32-DEV"
var Version = "v1.33-DEV"

View File

@ -29,6 +29,7 @@ docs = [
"hubic.md",
"b2.md",
"yandex.md",
"crypt.md",
"local.md",
"changelog.md",
"bugs.md",

rclone.1
View File

@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 1.16.0.2
.\"
.TH "rclone" "1" "Aug 04, 2016" "User Manual" ""
.TH "rclone" "1" "Aug 24, 2016" "User Manual" ""
.hy
.SH Rclone
.PP
@ -645,7 +645,7 @@ rclone\ dedupe\ [mode]\ remote:path
.IP
.nf
\f[C]
\ \ \ \ \ \ \-\-dedupe\-mode\ value\ \ \ Dedupe\ mode\ interactive|skip|first|newest|oldest|rename.\ (default\ "interactive")
\ \ \ \ \ \ \-\-dedupe\-mode\ string\ \ \ Dedupe\ mode\ interactive|skip|first|newest|oldest|rename.
\f[]
.fi
.SS rclone authorize
@ -662,6 +662,42 @@ browser \- use as instructed by rclone config.
rclone\ authorize
\f[]
.fi
.SS rclone cat
.PP
Concatenates any files and sends them to stdout.
.SS Synopsis
.PP
rclone cat sends any files to standard output.
.PP
You can use it like this to output a single file
.IP
.nf
\f[C]
rclone\ cat\ remote:path/to/file
\f[]
.fi
.PP
Or like this to output any file in dir or subdirectories.
.IP
.nf
\f[C]
rclone\ cat\ remote:path/to/dir
\f[]
.fi
.PP
Or like this to output any .txt files in dir or subdirectories.
.IP
.nf
\f[C]
rclone\ \-\-include\ "*.txt"\ cat\ remote:path/to/dir
\f[]
.fi
.IP
.nf
\f[C]
rclone\ cat\ remote:path
\f[]
.fi
.SS rclone genautocomplete
.PP
Output bash completion script for rclone.
@ -709,6 +745,99 @@ website.
rclone\ gendocs\ output_directory
\f[]
.fi
.SS rclone mount
.PP
Mount the remote as a mountpoint.
\f[B]EXPERIMENTAL\f[]
.SS Synopsis
.PP
rclone mount allows Linux, FreeBSD and macOS to mount any of
Rclone\[aq]s cloud storage systems as a file system with FUSE.
.PP
This is \f[B]EXPERIMENTAL\f[] \- use with care.
.PP
First set up your remote using \f[C]rclone\ config\f[].
Check it works with \f[C]rclone\ ls\f[] etc.
.PP
Start the mount like this
.IP
.nf
\f[C]
rclone\ mount\ remote:path/to/files\ /path/to/local/mount\ &
\f[]
.fi
.PP
Stop the mount with
.IP
.nf
\f[C]
fusermount\ \-u\ /path/to/local/mount
\f[]
.fi
.PP
Or with OS X
.IP
.nf
\f[C]
umount\ /path/to/local/mount
\f[]
.fi
.SS Limitations
.PP
This can only read files sequentially, or write files sequentially.
It can\[aq]t read and write or seek in files.
.PP
rclonefs inherits rclone\[aq]s directory handling.
In rclone\[aq]s world directories don\[aq]t really exist.
This means that empty directories will have a tendency to disappear once
they fall out of the directory cache.
.PP
The bucket based FSes (eg swift, s3, google cloud storage, b2)
won\[aq]t work from the root \- you will need to specify a bucket, or a
path within the bucket.
So \f[C]swift:\f[] won\[aq]t work whereas \f[C]swift:bucket\f[] will as
will \f[C]swift:bucket/path\f[].
.PP
Only supported on Linux, FreeBSD and OS X at the moment.
.SS rclone mount vs rclone sync/copy
.PP
File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable.
The rclone sync/copy commands cope with this with lots of retries.
However rclone mount can\[aq]t use retries in the same way without
making local copies of the uploads.
This might happen in the future, but for the moment rclone mount
won\[aq]t do that, so will be less reliable than the rclone command.
.SS Bugs
.IP \[bu] 2
All the remotes should work for read, but some may not for write
.RS 2
.IP \[bu] 2
those which need to know the size in advance won\[aq]t \- eg B2
.IP \[bu] 2
maybe should pass in size as \-1 to mean work it out
.RE
.SS TODO
.IP \[bu] 2
Check hashes on upload/download
.IP \[bu] 2
Preserve timestamps
.IP \[bu] 2
Move directories
.IP
.nf
\f[C]
rclone\ mount\ remote:path\ /path/to/mountpoint
\f[]
.fi
.SS Options
.IP
.nf
\f[C]
\ \ \ \ \ \ \-\-debug\-fuse\ \ \ Debug\ the\ FUSE\ internals\ \-\ needs\ \-v.
\ \ \ \ \ \ \-\-no\-modtime\ \ \ Don\[aq]t\ read\ the\ modification\ time\ (can\ speed\ things\ up).
\f[]
.fi
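.PP
As a usage sketch combining the two options above (the remote name and
paths are illustrative), mount with FUSE debugging enabled and unmount
when done:
.IP
.nf
\f[C]
rclone\ mount\ \-\-no\-modtime\ \-\-debug\-fuse\ \-v\ remote:path\ /path/to/mountpoint\ &
fusermount\ \-u\ /path/to/mountpoint
\f[]
.fi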
.SS Copying single files
.PP
rclone normally syncs or copies directories.
@ -2415,6 +2544,47 @@ This causes rclone to be limited to transferring about 2 files per
second only.
Individual files may be transferred much faster at 100s of MBytes/s but
lots of small files can take a long time.
.SS Making your own client_id
.PP
When you use rclone with Google drive in its default configuration you
are using rclone\[aq]s client_id.
This is shared between all the rclone users.
There is a global rate limit, set by Google, on the number of queries
per second that each client_id can make.
rclone already has a high quota and I will continue to make sure it is
high enough by contacting Google.
.PP
However you might find you get better performance making your own
client_id if you are a heavy user.
Or you may not depending on exactly how Google have been raising
rclone\[aq]s rate limit.
.PP
Here is how to create your own Google Drive client ID for rclone:
.IP "1." 3
Log into the Google API Console (https://console.developers.google.com/)
with your Google account.
It doesn\[aq]t matter what Google account you use.
(It need not be the same account as the Google Drive you want to access)
.IP "2." 3
Select a project or create a new project.
.IP "3." 3
Under Overview, Google APIs, Google Apps APIs, click "Drive API", then
"Enable".
.IP "4." 3
Click "Credentials" in the left\-side panel (not "Go to credentials",
which opens the wizard), then "Create credentials", then "OAuth client
ID".
It will prompt you to set the OAuth consent screen product name, if you
haven\[aq]t set one already.
.IP "5." 3
Choose an application type of "other", and click "Create".
(the default name is fine)
.IP "6." 3
It will show you a client ID and client secret.
Use these values in rclone config to add a new remote or edit an
existing remote.
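.PP
Once done, a drive remote using your own credentials might look like
this in the rclone config file (a hypothetical sketch \- the remote name
and the credential values are placeholders):
.IP
.nf
\f[C]
[mydrive]
type\ =\ drive
client_id\ =\ 123456789.apps.googleusercontent.com
client_secret\ =\ YOUR_CLIENT_SECRET
\f[]
.fi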
.PP
(Thanks to \@balazer on github for these instructions.)
.SS Amazon S3
.PP
Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for
@ -2538,6 +2708,25 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 9\ /\ South\ America\ (Sao\ Paulo)\ Region.
\ \ \ \\\ "sa\-east\-1"
location_constraint>\ 1
Canned\ ACL\ used\ when\ creating\ buckets\ and/or\ storing\ objects\ in\ S3.
For\ more\ info\ visit\ http://docs.aws.amazon.com/AmazonS3/latest/dev/acl\-overview.html#canned\-acl
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Owner\ gets\ FULL_CONTROL.\ No\ one\ else\ has\ access\ rights\ (default).
\ \ \ \\\ "private"
\ 2\ /\ Owner\ gets\ FULL_CONTROL.\ The\ AllUsers\ group\ gets\ READ\ access.
\ \ \ \\\ "public\-read"
\ \ \ /\ Owner\ gets\ FULL_CONTROL.\ The\ AllUsers\ group\ gets\ READ\ and\ WRITE\ access.
\ 3\ |\ Granting\ this\ on\ a\ bucket\ is\ generally\ not\ recommended.
\ \ \ \\\ "public\-read\-write"
\ 4\ /\ Owner\ gets\ FULL_CONTROL.\ The\ AuthenticatedUsers\ group\ gets\ READ\ access.
\ \ \ \\\ "authenticated\-read"
\ \ \ /\ Object\ owner\ gets\ FULL_CONTROL.\ Bucket\ owner\ gets\ READ\ access.
\ 5\ |\ If\ you\ specify\ this\ canned\ ACL\ when\ creating\ a\ bucket,\ Amazon\ S3\ ignores\ it.
\ \ \ \\\ "bucket\-owner\-read"
\ \ \ /\ Both\ the\ object\ owner\ and\ the\ bucket\ owner\ get\ FULL_CONTROL\ over\ the\ object.
\ 6\ |\ If\ you\ specify\ this\ canned\ ACL\ when\ creating\ a\ bucket,\ Amazon\ S3\ ignores\ it.
\ \ \ \\\ "bucket\-owner\-full\-control"
acl>\ private
The\ server\-side\ encryption\ algorithm\ used\ when\ storing\ this\ object\ in\ S3.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ None
@ -3451,6 +3640,14 @@ The default for this is 9GB which shouldn\[aq]t need to be changed.
To download files above this threshold, rclone requests a
\f[C]tempLink\f[] which downloads the file through a temporary URL
directly from the underlying S3 storage.
.SS \-\-acd\-upload\-wait\-time=TIME
.PP
Sometimes Amazon Drive gives an error even though a file has been fully
uploaded, and the file then appears anyway after a little while.
This controls the time rclone waits \- 2 minutes by default.
You might want to increase the time if you are having problems with very
big files.
Upload with the \f[C]\-v\f[] flag for more info.
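.PP
For example, to wait up to 5 minutes for big uploads to appear (the
remote name \f[C]acd:\f[] is illustrative):
.IP
.nf
\f[C]
rclone\ copy\ \-v\ \-\-acd\-upload\-wait\-time\ 5m\ /path/to/big/files\ acd:backup
\f[]
.fi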
.SS Limitations
.PP
Note that Amazon Drive is case insensitive so you can\[aq]t have a file
@ -4186,6 +4383,320 @@ format.
.SS MD5 checksums
.PP
MD5 checksums are natively supported by Yandex Disk.
.SS Crypt
.PP
The \f[C]crypt\f[] remote encrypts and decrypts another remote.
.PP
To use it first set up the underlying remote following the config
instructions for that remote.
You can also use a local pathname instead of a remote, which will
encrypt and decrypt from that directory \- this might be useful for
encrypting onto a USB stick, for example.
.PP
First check your chosen remote is working \- we\[aq]ll call it
\f[C]remote:path\f[] in these docs.
Note that anything inside \f[C]remote:path\f[] will be encrypted and
anything outside won\[aq]t.
This means that if you are using a bucket based remote (eg S3, B2,
swift) then you should probably put the bucket in the remote
\f[C]s3:bucket\f[].
If you just use \f[C]s3:\f[] then rclone will make encrypted bucket
names too (if using file name encryption) which may or may not be what
you want.
.PP
Now configure \f[C]crypt\f[] using \f[C]rclone\ config\f[].
We will call this one \f[C]secret\f[] to differentiate it from the
\f[C]remote\f[].
.IP
.nf
\f[C]
No\ remotes\ found\ \-\ make\ a\ new\ one
n)\ New\ remote
s)\ Set\ configuration\ password
q)\ Quit\ config
n/s/q>\ n\ \ \
name>\ secret
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
\ 5\ /\ Encrypt/Decrypt\ a\ remote
\ \ \ \\\ "crypt"
\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
\ 7\ /\ Google\ Drive
\ \ \ \\\ "drive"
\ 8\ /\ Hubic
\ \ \ \\\ "hubic"
\ 9\ /\ Local\ Disk
\ \ \ \\\ "local"
10\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
12\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 5
Remote\ to\ encrypt/decrypt.
remote>\ remote:path
How\ to\ encrypt\ the\ filenames.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Don\[aq]t\ encrypt\ the\ file\ names.\ \ Adds\ a\ ".bin"\ extension\ only.
\ \ \ \\\ "off"
\ 2\ /\ Encrypt\ the\ filenames\ see\ the\ docs\ for\ the\ details.
\ \ \ \\\ "standard"
filename_encryption>\ 2
Password\ or\ pass\ phrase\ for\ encryption.
y)\ Yes\ type\ in\ my\ own\ password
g)\ Generate\ random\ password
y/g>\ y
Enter\ the\ password:
password:
Confirm\ the\ password:
password:
Password\ or\ pass\ phrase\ for\ salt.\ Optional\ but\ recommended.
Should\ be\ different\ to\ the\ previous\ password.
y)\ Yes\ type\ in\ my\ own\ password
g)\ Generate\ random\ password
n)\ No\ leave\ this\ optional\ password\ blank
y/g/n>\ g
Password\ strength\ in\ bits.
64\ is\ just\ about\ memorable
128\ is\ secure
1024\ is\ the\ maximum
Bits>\ 128
Your\ password\ is:\ JAsJvRcgR\-_veXNfy_sGmQ
Use\ this\ password?
y)\ Yes
n)\ No
y/n>\ y
Remote\ config
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[secret]
remote\ =\ remote:path
filename_encryption\ =\ standard
password\ =\ CfDxopZIXFG0Oo\-ac7dPLWWOHkNJbw
password2\ =\ HYUpfuzHJL8qnX9fOaIYijq0xnVLwyVzp3y4SF3TwYqAU6HLysk
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
\f[]
.fi
.PP
\f[B]Important\f[] The password is stored in the config file, lightly
obscured, so it isn\[aq]t immediately obvious what it is.
It is in no way secure unless you use config file encryption.
.PP
A long passphrase is recommended, or you can use a random one.
Note that if you reconfigure rclone with the same passwords/passphrases
elsewhere it will be compatible \- all the secrets used are derived from
those two passwords/passphrases.
.PP
Note that rclone does not encrypt
.IP \[bu] 2
file length \- this can be calculated within 16 bytes
.IP \[bu] 2
modification time \- used for syncing
.SS Example
.PP
To test I made a little directory of files using "standard" file name
encryption.
.IP
.nf
\f[C]
plaintext/
├──\ file0.txt
├──\ file1.txt
└──\ subdir
\ \ \ \ ├──\ file2.txt
\ \ \ \ ├──\ file3.txt
\ \ \ \ └──\ subsubdir
\ \ \ \ \ \ \ \ └──\ file4.txt
\f[]
.fi
.PP
Copy these to the remote and list them back
.IP
.nf
\f[C]
$\ rclone\ \-q\ copy\ plaintext\ secret:
$\ rclone\ \-q\ ls\ secret:
\ \ \ \ \ \ \ \ 7\ file1.txt
\ \ \ \ \ \ \ \ 6\ file0.txt
\ \ \ \ \ \ \ \ 8\ subdir/file2.txt
\ \ \ \ \ \ \ 10\ subdir/subsubdir/file4.txt
\ \ \ \ \ \ \ \ 9\ subdir/file3.txt
\f[]
.fi
.PP
Now see what that looked like when encrypted
.IP
.nf
\f[C]
$\ rclone\ \-q\ ls\ remote:path
\ \ \ \ \ \ \ 55\ hagjclgavj2mbiqm6u6cnjjqcg
\ \ \ \ \ \ \ 54\ v05749mltvv1tf4onltun46gls
\ \ \ \ \ \ \ 57\ 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
\ \ \ \ \ \ \ 58\ 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
\ \ \ \ \ \ \ 56\ 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
\f[]
.fi
.PP
Note that this retains the directory structure which means you can do
this
.IP
.nf
\f[C]
$\ rclone\ \-q\ ls\ secret:subdir
\ \ \ \ \ \ \ \ 8\ file2.txt
\ \ \ \ \ \ \ \ 9\ file3.txt
\ \ \ \ \ \ \ 10\ subsubdir/file4.txt
\f[]
.fi
.PP
If you don\[aq]t use file name encryption then the remote will look
like this \- note the \f[C]\&.bin\f[] extensions added to prevent the
cloud provider attempting to interpret the data.
.IP
.nf
\f[C]
$\ rclone\ \-q\ ls\ remote:path
\ \ \ \ \ \ \ 54\ file0.txt.bin
\ \ \ \ \ \ \ 57\ subdir/file3.txt.bin
\ \ \ \ \ \ \ 56\ subdir/file2.txt.bin
\ \ \ \ \ \ \ 58\ subdir/subsubdir/file4.txt.bin
\ \ \ \ \ \ \ 55\ file1.txt.bin
\f[]
.fi
.SS File name encryption modes
.PP
Here are some of the features of the file name encryption modes.
.PP
Off
.IP \[bu] 2
doesn\[aq]t hide file names or directory structure
.IP \[bu] 2
allows for longer file names (~246 characters)
.IP \[bu] 2
can use sub paths and copy single files
.PP
Standard
.IP \[bu] 2
file names encrypted
.IP \[bu] 2
file names can\[aq]t be as long (~156 characters)
.IP \[bu] 2
can use sub paths and copy single files
.IP \[bu] 2
directory structure visible
.IP \[bu] 2
identical file names will have identical uploaded names
.IP \[bu] 2
can use shortcuts to shorten the directory recursion
.PP
Cloud storage systems have various limits on file name length and total
path length which you are more likely to hit using "Standard" file name
encryption.
If you keep your file names to below 156 characters in length then you
should be OK on all providers.
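.PP
As a back of the envelope check on that figure (an illustration, not
taken from the rclone source \- it assumes a provider name limit of
~250 characters, consistent with the ~246 quoted above for "off" mode
once the ".bin" extension is added):
.IP
.nf
\f[C]
250\ characters\ *\ 5\ bytes\ /\ 8\ characters\ ~=\ 156\ bytes\ of\ base32\ input
EME\ encryption\ is\ length\ preserving\ \->\ ~156\ bytes\ of\ padded\ file\ name
(PKCS#7\ padding\ to\ a\ multiple\ of\ 16\ bytes\ costs\ a\ little\ more)
\f[]
.fi
.PP
The padding, encryption and encoding steps are described in the File
formats section below.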
.PP
There may be an even more secure file name encryption mode in the future
which will address the long file name problem.
.SS File formats
.SS File encryption
.PP
Files are encrypted 1:1 source file to destination object.
The file has a header and is divided into chunks.
.SS Header
.IP \[bu] 2
8 bytes magic string \f[C]RCLONE\\x00\\x00\f[]
.IP \[bu] 2
24 bytes Nonce (IV)
.PP
The initial nonce is generated from the operating system\[aq]s
cryptographically strong random number generator.
The nonce is incremented for each chunk read, making sure each nonce is
unique for each block written.
The chance of a nonce being re\-used is minuscule.
If you wrote an exabyte of data (10¹⁸ bytes) you would have a
probability of approximately 2×10⁻³² of re\-using a nonce.
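.PP
As a rough check on that figure (a birthday bound estimate, not taken
from the rclone source):
.IP
.nf
\f[C]
10¹⁸\ bytes\ /\ 65536\ bytes\ per\ chunk\ ≈\ 1.5×10¹³\ nonces
24\ byte\ nonce\ =\ 192\ bits,\ so\ 2¹⁹²\ possible\ values
collision\ probability\ ≈\ (1.5×10¹³)²\ /\ 2\ /\ 2¹⁹²\ ≈\ 2×10⁻³²
\f[]
.fi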
.SS Chunk
.PP
Each chunk will contain 64kB of data, except for the last one which may
have less data.
The data chunk is in standard NaCl secretbox format.
Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate
messages.
.PP
Each chunk contains:
.IP \[bu] 2
16 Bytes of Poly1305 authenticator
.IP \[bu] 2
1 \- 65536 bytes XSalsa20 encrypted data
.PP
The 64kB chunk size was chosen as the best performing chunk size (the
authenticator takes too much time below this and the performance drops
off due to cache effects above this).
Note that these chunks are buffered in memory so they can\[aq]t be too
big.
.PP
This uses a 32 byte (256 bit) key derived from the user password.
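.PP
To illustrate, here is a minimal Go sketch which seals one full chunk
with NaCl secretbox (\f[C]golang.org/x/crypto/nacl/secretbox\f[]).
The zero key and nonce are placeholders \- in rclone the key is
derived from your password and the nonce is unique per chunk.
.IP
.nf
\f[C]
package\ main

import\ (
\ \ \ \ "fmt"

\ \ \ \ "golang.org/x/crypto/nacl/secretbox"
)

func\ main()\ {
\ \ \ \ var\ key\ [32]byte\ \ \ //\ in\ practice\ derived\ from\ the\ user\ password
\ \ \ \ var\ nonce\ [24]byte\ //\ in\ practice\ unique\ per\ chunk\ (see\ Header)

\ \ \ \ data\ :=\ make([]byte,\ 64*1024)\ //\ one\ full\ chunk\ of\ plaintext

\ \ \ \ //\ Seal\ returns\ the\ 16\ byte\ Poly1305\ authenticator\ followed\ by
\ \ \ \ //\ the\ XSalsa20\ encrypted\ data.
\ \ \ \ sealed\ :=\ secretbox.Seal(nil,\ data,\ &nonce,\ &key)
\ \ \ \ fmt.Println(len(sealed))\ //\ 65552\ =\ 16\ +\ 65536
}
\f[]
.fi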
.SS Examples
.PP
1 byte file will encrypt to
.IP \[bu] 2
32 bytes header
.IP \[bu] 2
17 bytes data chunk
.PP
49 bytes total
.PP
1MB (1048576 bytes) file will encrypt to
.IP \[bu] 2
32 bytes header
.IP \[bu] 2
16 chunks of 65552 bytes (65536 bytes of data plus the 16 byte
authenticator)
.PP
1048864 bytes total (a 0.03% overhead).
This is the overhead for big files.
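.PP
These totals follow directly from the layout above.
Here is a small Go helper (a sketch, not rclone\[aq]s own function)
which reproduces them:
.IP
.nf
\f[C]
package\ main

import\ "fmt"

//\ encryptedSize\ computes\ the\ expected\ encrypted\ size\ from\ the\ layout
//\ described\ above:\ a\ 32\ byte\ header,\ then\ a\ 16\ byte\ authenticator
//\ for\ every\ chunk\ of\ up\ to\ 64kB\ of\ data.
func\ encryptedSize(size\ int64)\ int64\ {
\ \ \ \ const\ headerSize\ =\ 8\ +\ 24\ \ \ //\ magic\ string\ +\ nonce
\ \ \ \ const\ chunkData\ =\ 64\ *\ 1024\ //\ plaintext\ bytes\ per\ full\ chunk
\ \ \ \ const\ chunkOverhead\ =\ 16\ \ \ \ //\ Poly1305\ authenticator
\ \ \ \ chunks\ :=\ (size\ +\ chunkData\ \-\ 1)\ /\ chunkData\ //\ round\ up
\ \ \ \ return\ headerSize\ +\ size\ +\ chunks*chunkOverhead
}

func\ main()\ {
\ \ \ \ fmt.Println(encryptedSize(1))\ \ \ \ \ \ \ //\ 49
\ \ \ \ fmt.Println(encryptedSize(1048576))\ //\ 1048864
}
\f[]
.fi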
.SS Name encryption
.PP
File names are encrypted segment by segment \- the path is broken up
into \f[C]/\f[] separated strings and these are encrypted individually.
.PP
Each segment is padded using PKCS#7 to a multiple of 16 bytes before
encryption.
.PP
They are then encrypted with EME using AES with a 256 bit key.
EME (ECB\-Mix\-ECB) is a wide\-block encryption mode presented in the
2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
.PP
This makes for deterministic encryption which is what we want \- the same
filename must encrypt to the same thing otherwise we can\[aq]t find it
on the cloud storage system.
.PP
This means that
.IP \[bu] 2
filenames with the same name will encrypt the same
.IP \[bu] 2
filenames which start the same won\[aq]t have a common prefix
.PP
This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of
which are derived from the user password.
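.PP
Putting those two steps together, here is a Go sketch of encrypting a
single name segment.
It assumes the \f[C]Transform\f[] API of the
\f[C]github.com/rfjakob/eme\f[] package and is an illustration rather
than rclone\[aq]s actual code \- the zero key and tweak are
placeholders for values derived from your password.
.IP
.nf
\f[C]
package\ main

import\ (
\ \ \ \ "crypto/aes"
\ \ \ \ "fmt"

\ \ \ \ "github.com/rfjakob/eme"
)

//\ pkcs7Pad\ pads\ b\ to\ a\ multiple\ of\ blockSize\ as\ described\ above.
func\ pkcs7Pad(b\ []byte,\ blockSize\ int)\ []byte\ {
\ \ \ \ n\ :=\ blockSize\ \-\ len(b)%blockSize\ //\ 1..blockSize\ padding\ bytes
\ \ \ \ for\ i\ :=\ 0;\ i\ <\ n;\ i++\ {
\ \ \ \ \ \ \ \ b\ =\ append(b,\ byte(n))
\ \ \ \ }
\ \ \ \ return\ b
}

func\ main()\ {
\ \ \ \ var\ nameKey\ [32]byte\ \ \ //\ 256\ bit\ AES\ key,\ derived\ from\ the\ password
\ \ \ \ var\ nameTweak\ [16]byte\ //\ 128\ bit\ IV,\ also\ derived

\ \ \ \ block,\ err\ :=\ aes.NewCipher(nameKey[:])
\ \ \ \ if\ err\ !=\ nil\ {
\ \ \ \ \ \ \ \ panic(err)
\ \ \ \ }
\ \ \ \ padded\ :=\ pkcs7Pad([]byte("file1.txt"),\ 16)
\ \ \ \ //\ Deterministic:\ the\ same\ segment\ always\ encrypts\ the\ same\ way.
\ \ \ \ enc\ :=\ eme.Transform(block,\ nameTweak[:],\ padded,\ eme.DirectionEncrypt)
\ \ \ \ fmt.Printf("%x\\n",\ enc)
}
\f[]
.fi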
.PP
After encryption they are written out using a modified version of
standard \f[C]base32\f[] encoding as described in RFC4648.
The standard encoding is modified in two ways:
.IP \[bu] 2
it becomes lower case (no\-one likes upper case filenames!)
.IP \[bu] 2
we strip the padding character \f[C]=\f[]
.PP
\f[C]base32\f[] is used rather than the more efficient \f[C]base64\f[]
so rclone can be used on case insensitive remotes (eg Windows, Amazon
Drive).
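.PP
Here is a Go sketch of the modified encoding using the standard
library (an illustration of the two modifications, not rclone\[aq]s
own encoder):
.IP
.nf
\f[C]
package\ main

import\ (
\ \ \ \ "encoding/base32"
\ \ \ \ "fmt"
\ \ \ \ "strings"
)

//\ encodeName\ applies\ the\ two\ modifications\ described\ above\ to
//\ standard\ RFC4648\ base32.
func\ encodeName(ciphertext\ []byte)\ string\ {
\ \ \ \ s\ :=\ base32.StdEncoding.EncodeToString(ciphertext)
\ \ \ \ s\ =\ strings.ToLower(s)\ \ \ \ \ \ \ \ \ \ \ //\ lower\ case
\ \ \ \ return\ strings.TrimRight(s,\ "=")\ //\ strip\ the\ padding\ character
}

func\ main()\ {
\ \ \ \ fmt.Println(encodeName([]byte("encrypted\ name\ bytes")))
}
\f[]
.fi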
.SS Key derivation
.PP
Rclone uses \f[C]scrypt\f[] with parameters \f[C]N=16384,\ r=8,\ p=1\f[]
with an optional user supplied salt (password2) to derive the 32+32+16
= 80 bytes of key material required.
If the user doesn\[aq]t supply a salt then rclone uses an internal one.
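.PP
Here is a Go sketch of the derivation using
\f[C]golang.org/x/crypto/scrypt\f[].
How the 80 bytes are split between the data key, the name key and the
name IV is an assumption for illustration:
.IP
.nf
\f[C]
package\ main

import\ (
\ \ \ \ "fmt"

\ \ \ \ "golang.org/x/crypto/scrypt"
)

func\ main()\ {
\ \ \ \ password\ :=\ []byte("your\ password")
\ \ \ \ salt\ :=\ []byte("your\ password2")\ //\ or\ an\ internal\ default\ salt

\ \ \ \ //\ N=16384,\ r=8,\ p=1\ and\ 80\ bytes\ of\ output\ as\ described\ above.
\ \ \ \ key,\ err\ :=\ scrypt.Key(password,\ salt,\ 16384,\ 8,\ 1,\ 80)
\ \ \ \ if\ err\ !=\ nil\ {
\ \ \ \ \ \ \ \ panic(err)
\ \ \ \ }
\ \ \ \ //\ Assumed\ split:\ 32\ byte\ data\ key,\ 32\ byte\ name\ key,\ 16\ byte\ IV.
\ \ \ \ dataKey,\ nameKey,\ nameIV\ :=\ key[:32],\ key[32:64],\ key[64:]
\ \ \ \ fmt.Println(len(dataKey),\ len(nameKey),\ len(nameIV))
}
\f[]
.fi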
.PP
\f[C]scrypt\f[] makes it impractical to mount a dictionary attack on
rclone encrypted data.
For full protection against this you should always use a salt.
.SS Local Filesystem
.PP
Local paths are specified as normal filesystem paths, eg
Of course this will cause problems if the absolute path length of a file
exceeds 258 characters on z, so only use this option if you have to.
.SS Changelog
.IP \[bu] 2
v1.33 \- 2016\-08\-24
.RS 2
.IP \[bu] 2
New Features
.IP \[bu] 2
Implement encryption
.RS 2
.IP \[bu] 2
data encrypted in NACL secretbox format
.IP \[bu] 2
with optional file name encryption
.RE
.IP \[bu] 2
New commands
.RS 2
.IP \[bu] 2
rclone mount \- implements FUSE mounting of remotes (EXPERIMENTAL)
.IP \[bu] 2
works on Linux, FreeBSD and OS X (need testers for the last 2!)
.IP \[bu] 2
rclone cat \- outputs remote file or files to the terminal
.IP \[bu] 2
rclone genautocomplete \- command to make a bash completion script for
rclone
.RE
.IP \[bu] 2
Editing a remote using \f[C]rclone\ config\f[] now goes through the
wizard
.IP \[bu] 2
Compile with go 1.7 \- this fixes rclone on macOS Sierra and on 386
processors
.IP \[bu] 2
Use cobra for sub commands and docs generation
.IP \[bu] 2
drive
.IP \[bu] 2
Document how to make your own client_id
.IP \[bu] 2
s3
.IP \[bu] 2
User\-configurable Amazon S3 ACL (thanks Radek Šenfeld)
.IP \[bu] 2
b2
.IP \[bu] 2
Fix stats accounting for upload \- no more jumping to 100% done
.IP \[bu] 2
On cleanup delete hide marker if it is the current file
.IP \[bu] 2
New B2 API endpoint (thanks Per Cederberg)
.IP \[bu] 2
Set maximum backoff to 5 Minutes
.IP \[bu] 2
onedrive
.IP \[bu] 2
Fix URL escaping in file names \- eg uploading files with \f[C]+\f[] in
them.
.IP \[bu] 2
amazon cloud drive
.IP \[bu] 2
Fix token expiry during large uploads
.IP \[bu] 2
Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
.IP \[bu] 2
local
.IP \[bu] 2
Fix filenames with invalid UTF\-8 not being uploaded
.IP \[bu] 2
Fix problem with some UTF\-8 characters on OS X
.RE
.IP \[bu] 2
v1.32 \- 2016\-07\-13
.RS 2
.IP \[bu] 2
Antonio Messina <antonio.s.messina@gmail.com>
.IP \[bu] 2
Stefan G.
Weichinger <office@oops.co.at>
.IP \[bu] 2
Per Cederberg <cederberg@gmail.com>
.IP \[bu] 2
Radek Šenfeld <rush@logic.cz>
.SS Contact the rclone project
.PP
The project website is at: