diff --git a/MANUAL.html b/MANUAL.html index 9f2871400..229718084 100644 --- a/MANUAL.html +++ b/MANUAL.html @@ -12,7 +12,7 @@

Rclone

Logo

@@ -102,7 +102,18 @@ sudo mandb

Source and destination paths are specified by the name you gave the storage system in the config file, followed by the sub path, eg "drive:myfolder" to look at "myfolder" in Google Drive.

You can define as many storage paths as you like in the config file.

Subcommands

-

rclone copy source:path dest:path

+

rclone uses a system of subcommands. For example

+
rclone ls remote:path # lists a remote
+rclone copy /local/path remote:path # copies /local/path to the remote
+rclone sync /local/path remote:path # syncs /local/path to the remote
+

rclone config

+

Enter an interactive configuration session.

+

Synopsis

+

Enter an interactive configuration session.

+
rclone config
+

rclone copy

+

Copy files from source to dest, skipping already copied

+

Synopsis

Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination.

Note that it is always the contents of the directory that is synced, not the directory itself, so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.

If dest:path doesn't exist, it is created and the source:path contents go there.

@@ -119,36 +130,27 @@ destpath/two.txt destpath/sourcepath/two.txt

If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination.

See the --no-traverse option for controlling whether rclone lists the destination directory or not.
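As a sketch of typical usage (the local path and remote name here are hypothetical examples, not from the manual), a copy is usually previewed with --dry-run before running for real:

```shell
# Preview what would be transferred - no changes are made
rclone --dry-run copy /home/user/photos remote:backup/photos

# Then perform the copy; only new or changed files are transferred
rclone copy /home/user/photos remote:backup/photos
```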

-

rclone sync source:path dest:path

+
rclone copy source:path dest:path
+

rclone sync

+

Make source and dest identical, modifying destination only.

+

Synopsis

Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary.

Important: Since this can cause data loss, test first with the --dry-run flag to see exactly what would be copied and deleted.

Note that files in the destination won't be deleted if there were any errors at any point.

It is always the contents of the directory that is synced, not the directory itself, so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See the extended explanation in the copy command above if unsure.

If dest:path doesn't exist, it is created and the source:path contents go there.
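Because sync can delete destination files, a cautious sketch (paths and remote name are hypothetical) always previews first:

```shell
# Always preview a sync first - it can delete files on the destination
rclone --dry-run sync /home/user/docs remote:docs

# Apply once the preview looks right
rclone sync /home/user/docs remote:docs
```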

-

move source:path dest:path

+
rclone sync source:path dest:path
+

rclone move

+

Move files from source to dest.

+

Synopsis

Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap.

If no filters are in use and if possible this will server side move source:path into dest:path. After this source:path will no longer exist.

Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path. If possible a server side move will be used, otherwise it will copy it (server side if possible) into dest:path then delete the original (if no errors on copy) in source:path.

Important: Since this can cause data loss, test first with the --dry-run flag.
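A hedged sketch of a move (source path and remote name are illustrative only), again previewed with --dry-run since the source is deleted afterwards:

```shell
# Preview the move first - files are removed from the source on success
rclone --dry-run move /home/user/uploads remote:archive/uploads

# Perform the move (server side where the remote supports it)
rclone move /home/user/uploads remote:archive/uploads
```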

-

rclone ls remote:path

-

List all the objects in the path with size and path.

-

rclone lsd remote:path

-

List all directories/containers/buckets in the path.

-

rclone lsl remote:path

-

List all the objects in the path with modification time, size and path.

-

rclone md5sum remote:path

-

Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces.

-

rclone sha1sum remote:path

-

Produces a sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.

-

rclone size remote:path

-

Prints the total size of objects in remote:path and the number of objects.

-

rclone mkdir remote:path

-

Make the path if it doesn't already exist

-

rclone rmdir remote:path

-

Remove the path. Note that you can't remove a path with objects in it; use purge for that.

-

rclone purge remote:path

-

Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use delete if you want to selectively delete files.

-

rclone delete remote:path

+
rclone move source:path dest:path
+

rclone delete

+

Remove the contents of path.

+

Synopsis

Remove the contents of path. Unlike purge it obeys include/exclude filters, so it can be used to selectively delete files.

Eg to delete all files bigger than 100 MBytes:

Check what would be deleted first (use either)

@@ -157,12 +159,71 @@ rclone --dry-run --min-size 100M delete remote:path

Then delete

rclone --min-size 100M delete remote:path

That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes.

-

rclone check source:path dest:path

+
rclone delete remote:path
+

rclone purge

+

Remove the path and all of its contents.

+

Synopsis

+

Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use delete if you want to selectively delete files.

+
rclone purge remote:path
+

rclone mkdir

+

Make the path if it doesn't already exist.

+

Synopsis

+

Make the path if it doesn't already exist.

+
rclone mkdir remote:path
+

rclone rmdir

+

Remove the path if empty.

+

Synopsis

+

Remove the path. Note that you can't remove a path with objects in it; use purge for that.

+
rclone rmdir remote:path
+

rclone check

+

Checks the files in the source and destination match.

+

Synopsis

Checks the files in the source and destination match. It compares sizes and MD5SUMs and prints a report of files which don't match. It doesn't alter the source or destination.

--size-only may be used to only compare the sizes, not the MD5SUMs.
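A brief usage sketch (both paths are hypothetical) showing the two verification modes:

```shell
# Compare sizes and MD5SUMs and report files which don't match
rclone check /home/user/docs remote:docs

# Faster variant that compares sizes only, skipping MD5SUMs
rclone check --size-only /home/user/docs remote:docs
```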

-

rclone cleanup remote:path

+
rclone check source:path dest:path
+

rclone ls

+

List all the objects in the path with size and path.

+

Synopsis

+

List all the objects in the path with size and path.

+
rclone ls remote:path
+

rclone lsd

+

List all directories/containers/buckets in the path.

+

Synopsis

+

List all directories/containers/buckets in the path.

+
rclone lsd remote:path
+

rclone lsl

+

List all the objects in the path with modification time, size and path.

+

Synopsis

+

List all the objects in the path with modification time, size and path.

+
rclone lsl remote:path
+

rclone md5sum

+

Produces an md5sum file for all the objects in the path.

+

Synopsis

+

Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces.

+
rclone md5sum remote:path
+

rclone sha1sum

+

Produces a sha1sum file for all the objects in the path.

+

Synopsis

+

Produces a sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.

+
rclone sha1sum remote:path
+

rclone size

+

Prints the total size and number of objects in remote:path.

+

Synopsis

+

Prints the total size and number of objects in remote:path.

+
rclone size remote:path
+

rclone version

+

Show the version number.

+

Synopsis

+

Show the version number.

+
rclone version
+

rclone cleanup

+

Clean up the remote if possible.

+

Synopsis

Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes.

-

rclone dedupe remote:path

+
rclone cleanup remote:path
+

rclone dedupe

+

Interactively find duplicate files and delete/rename them.

+

Synopsis

By default dedupe interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with Google Drive, which can have duplicate file names.

The dedupe command will delete all but one of any identical (same md5sum) files it finds without confirmation. This means that for most duplicated files the dedupe command will not be interactive. You can use --dry-run to see what would happen without doing anything.

Here is an example run.

@@ -207,7 +268,7 @@ two-3.txt: renamed from: two.txt 564374 2016-03-05 16:22:52.118000000 two-1.txt 6048320 2016-03-05 16:22:46.185000000 two-2.txt 1744073 2016-03-05 16:22:38.104000000 two-3.txt -

Dedupe can be run non interactively using the --dedupe-mode flag.

+

Dedupe can be run non interactively using the --dedupe-mode flag, or by passing the mode as an extra parameter with the same value.

For example to rename all the identically named photos in your Google Photos directory, do

rclone dedupe --dedupe-mode rename "drive:Google Photos"
-

rclone config

-

Enter an interactive configuration session.

-

rclone help

-

Prints help on rclone commands and options.

+

Or

+
rclone dedupe rename "drive:Google Photos"
+
rclone dedupe [mode] remote:path
+

Options

+
      --dedupe-mode value   Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
+

rclone authorize

+

Remote authorization.

+

Synopsis

+

Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.

+
rclone authorize
+

rclone genautocomplete

+

Output bash completion script for rclone.

+

Synopsis

+

Generates a bash shell autocompletion script for rclone.

+

This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg

+
sudo rclone genautocomplete
+

Log out and log in again to use the autocompletion scripts, or source them directly

+
. /etc/bash_completion
+

If you supply a command line argument the script will be written there.

+
rclone genautocomplete [output_file]
+

rclone gendocs

+

Output markdown docs for rclone to the directory supplied.

+

Synopsis

+

This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.

+
rclone gendocs output_directory

Copying single files

rclone normally syncs or copies directories. However if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory if it isn't.

For example, suppose you have a remote containing a file called test.jpg, then you could copy just that file like this
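A plausible sketch of such a command (the remote name and local destination are illustrative assumptions):

```shell
# Copy a single file from the remote into a local directory
rclone copy remote:test.jpg /tmp/download
```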

@@ -258,7 +340,7 @@ two-3.txt: renamed from: two.txt

This can be used when scripting to make aged backups efficiently, eg

rclone sync remote:current-backup remote:previous-backup
 rclone sync /path/to/files remote:current-backup
-

Options

+

Options

Rclone has a number of options to control its behaviour.

Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

Options which use SIZE use kByte by default. However a suffix of b for bytes, k for kBytes, M for MBytes and G for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
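The suffixes map to binary multiples, which a quick shell check (independent of rclone) confirms:

```shell
# Binary unit factors used by SIZE options: b, k, M, G
echo "k = $((2**10)) bytes"   # 1024
echo "M = $((2**20)) bytes"   # 1048576
echo "G = $((2**30)) bytes"   # 1073741824
```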

diff --git a/MANUAL.md b/MANUAL.md index 6ebdf71e0..d92ae1243 100644 --- a/MANUAL.md +++ b/MANUAL.md @@ -1,6 +1,6 @@ % rclone(1) User Manual % Nick Craig-Wood -% Jul 13, 2016 +% Aug 04, 2016 Rclone ====== @@ -134,7 +134,32 @@ You can define as many storage paths as you like in the config file. Subcommands ----------- -### rclone copy source:path dest:path ### +rclone uses a system of subcommands. For example + + rclone ls remote:path # lists a re + rclone copy /local/path remote:path # copies /local/path to the remote + rclone sync /local/path remote:path # syncs /local/path to the remote + +## rclone config + +Enter an interactive configuration session. + +### Synopsis + + +Enter an interactive configuration session. + +``` +rclone config +``` + +## rclone copy + +Copy files from source to dest, skipping already copied + +### Synopsis + + Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or @@ -175,7 +200,18 @@ source or destination. See the `--no-traverse` option for controlling whether rclone lists the destination directory or not. -### rclone sync source:path dest:path ### + +``` +rclone copy source:path dest:path +``` + +## rclone sync + +Make source and dest identical, modifying destination only. + +### Synopsis + + Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and @@ -196,7 +232,18 @@ extended explanation in the `copy` command above if unsure. If dest:path doesn't exist, it is created and the source:path contents go there. -### move source:path dest:path ### + +``` +rclone sync source:path dest:path +``` + +## rclone move + +Move files from source to dest. + +### Synopsis + + Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap. 
@@ -214,50 +261,18 @@ into `dest:path` then delete the original (if no errors on copy) in **Important**: Since this can cause data loss, test first with the --dry-run flag. -### rclone ls remote:path ### -List all the objects in the path with size and path. +``` +rclone move source:path dest:path +``` -### rclone lsd remote:path ### +## rclone delete -List all directories/containers/buckets in the the path. +Remove the contents of path. -### rclone lsl remote:path ### +### Synopsis -List all the objects in the the path with modification time, -size and path. -### rclone md5sum remote:path ### - -Produces an md5sum file for all the objects in the path. This -is in the same format as the standard md5sum tool produces. - -### rclone sha1sum remote:path ### - -Produces an sha1sum file for all the objects in the path. This -is in the same format as the standard sha1sum tool produces. - -### rclone size remote:path ### - -Prints the total size of objects in remote:path and the number of -objects. - -### rclone mkdir remote:path ### - -Make the path if it doesn't already exist - -### rclone rmdir remote:path ### - -Remove the path. Note that you can't remove a path with -objects in it, use purge for that. - -### rclone purge remote:path ### - -Remove the path and all of its contents. Note that this does not obey -include/exclude filters - everything will be removed. Use `delete` if -you want to selectively delete files. - -### rclone delete remote:path ### Remove the contents of path. Unlike `purge` it obeys include/exclude filters so can be used to selectively delete files. @@ -276,7 +291,63 @@ Then delete That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes. -### rclone check source:path dest:path ### + +``` +rclone delete remote:path +``` + +## rclone purge + +Remove the path and all of its contents. + +### Synopsis + + + +Remove the path and all of its contents. 
Note that this does not obey +include/exclude filters - everything will be removed. Use `delete` if +you want to selectively delete files. + + +``` +rclone purge remote:path +``` + +## rclone mkdir + +Make the path if it doesn't already exist. + +### Synopsis + + +Make the path if it doesn't already exist. + +``` +rclone mkdir remote:path +``` + +## rclone rmdir + +Remove the path if empty. + +### Synopsis + + + +Remove the path. Note that you can't remove a path with +objects in it, use purge for that. + +``` +rclone rmdir remote:path +``` + +## rclone check + +Checks the files in the source and destination match. + +### Synopsis + + Checks the files in the source and destination match. It compares sizes and MD5SUMs and prints a report of files which @@ -284,12 +355,131 @@ don't match. It doesn't alter the source or destination. `--size-only` may be used to only compare the sizes, not the MD5SUMs. -### rclone cleanup remote:path ### + +``` +rclone check source:path dest:path +``` + +## rclone ls + +List all the objects in the the path with size and path. + +### Synopsis + + +List all the objects in the the path with size and path. + +``` +rclone ls remote:path +``` + +## rclone lsd + +List all directories/containers/buckets in the the path. + +### Synopsis + + +List all directories/containers/buckets in the the path. + +``` +rclone lsd remote:path +``` + +## rclone lsl + +List all the objects path with modification time, size and path. + +### Synopsis + + +List all the objects path with modification time, size and path. + +``` +rclone lsl remote:path +``` + +## rclone md5sum + +Produces an md5sum file for all the objects in the path. + +### Synopsis + + + +Produces an md5sum file for all the objects in the path. This +is in the same format as the standard md5sum tool produces. + + +``` +rclone md5sum remote:path +``` + +## rclone sha1sum + +Produces an sha1sum file for all the objects in the path. 
+ +### Synopsis + + + +Produces an sha1sum file for all the objects in the path. This +is in the same format as the standard sha1sum tool produces. + + +``` +rclone sha1sum remote:path +``` + +## rclone size + +Prints the total size and number of objects in remote:path. + +### Synopsis + + +Prints the total size and number of objects in remote:path. + +``` +rclone size remote:path +``` + +## rclone version + +Show the version number. + +### Synopsis + + +Show the version number. + +``` +rclone version +``` + +## rclone cleanup + +Clean up the remote if possible + +### Synopsis + + Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes. -### rclone dedupe remote:path ### + +``` +rclone cleanup remote:path +``` + +## rclone dedupe + +Interactively find duplicate files delete/rename them. + +### Synopsis + + By default `dedup` interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with @@ -304,58 +494,52 @@ Here is an example run. Before - with duplicates -``` -$ rclone lsl drive:dupes - 6048320 2016-03-05 16:23:16.798000000 one.txt - 6048320 2016-03-05 16:23:11.775000000 one.txt - 564374 2016-03-05 16:23:06.731000000 one.txt - 6048320 2016-03-05 16:18:26.092000000 one.txt - 6048320 2016-03-05 16:22:46.185000000 two.txt - 1744073 2016-03-05 16:22:38.104000000 two.txt - 564374 2016-03-05 16:22:52.118000000 two.txt -``` + $ rclone lsl drive:dupes + 6048320 2016-03-05 16:23:16.798000000 one.txt + 6048320 2016-03-05 16:23:11.775000000 one.txt + 564374 2016-03-05 16:23:06.731000000 one.txt + 6048320 2016-03-05 16:18:26.092000000 one.txt + 6048320 2016-03-05 16:22:46.185000000 two.txt + 1744073 2016-03-05 16:22:38.104000000 two.txt + 564374 2016-03-05 16:22:52.118000000 two.txt Now the `dedupe` session -``` -$ rclone dedupe drive:dupes -2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode. 
-one.txt: Found 4 duplicates - deleting identical copies -one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36") -one.txt: 2 duplicates remain - 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36 - 2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81 -s) Skip and do nothing -k) Keep just one (choose which in next step) -r) Rename all to be different (by changing file.jpg to file-1.jpg) -s/k/r> k -Enter the number of the file to keep> 1 -one.txt: Deleted 1 extra copies -two.txt: Found 3 duplicates - deleting identical copies -two.txt: 3 duplicates remain - 1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81 - 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36 - 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802 -s) Skip and do nothing -k) Keep just one (choose which in next step) -r) Rename all to be different (by changing file.jpg to file-1.jpg) -s/k/r> r -two-1.txt: renamed from: two.txt -two-2.txt: renamed from: two.txt -two-3.txt: renamed from: two.txt -``` + $ rclone dedupe drive:dupes + 2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode. 
+ one.txt: Found 4 duplicates - deleting identical copies + one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36") + one.txt: 2 duplicates remain + 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36 + 2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81 + s) Skip and do nothing + k) Keep just one (choose which in next step) + r) Rename all to be different (by changing file.jpg to file-1.jpg) + s/k/r> k + Enter the number of the file to keep> 1 + one.txt: Deleted 1 extra copies + two.txt: Found 3 duplicates - deleting identical copies + two.txt: 3 duplicates remain + 1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81 + 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36 + 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802 + s) Skip and do nothing + k) Keep just one (choose which in next step) + r) Rename all to be different (by changing file.jpg to file-1.jpg) + s/k/r> r + two-1.txt: renamed from: two.txt + two-2.txt: renamed from: two.txt + two-3.txt: renamed from: two.txt The result being -``` -$ rclone lsl drive:dupes - 6048320 2016-03-05 16:23:16.798000000 one.txt - 564374 2016-03-05 16:22:52.118000000 two-1.txt - 6048320 2016-03-05 16:22:46.185000000 two-2.txt - 1744073 2016-03-05 16:22:38.104000000 two-3.txt -``` + $ rclone lsl drive:dupes + 6048320 2016-03-05 16:23:16.798000000 one.txt + 564374 2016-03-05 16:22:52.118000000 two-1.txt + 6048320 2016-03-05 16:22:46.185000000 two-2.txt + 1744073 2016-03-05 16:22:38.104000000 two-3.txt -Dedupe can be run non interactively using the `--dedupe-mode` flag. +Dedupe can be run non interactively using the `--dedupe-mode` flag or by using an extra parameter with the same value * `--dedupe-mode interactive` - interactive as above. * `--dedupe-mode skip` - removes identical files then skips anything left. 
@@ -368,13 +552,81 @@ For example to rename all the identically named photos in your Google Photos dir rclone dedupe --dedupe-mode rename "drive:Google Photos" -### rclone config ### +Or -Enter an interactive configuration session. + rclone dedupe rename "drive:Google Photos" -### rclone help ### -Prints help on rclone commands and options. +``` +rclone dedupe [mode] remote:path +``` + +### Options + +``` + --dedupe-mode value Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive") +``` + +## rclone authorize + +Remote authorization. + +### Synopsis + + + +Remote authorization. Used to authorize a remote or headless +rclone from a machine with a browser - use as instructed by +rclone config. + +``` +rclone authorize +``` + +## rclone genautocomplete + +Output bash completion script for rclone. + +### Synopsis + + + +Generates a bash shell autocompletion script for rclone. + +This writes to /etc/bash_completion.d/rclone by default so will +probably need to be run with sudo or as root, eg + + sudo rclone genautocomplete + +Logout and login again to use the autocompletion scripts, or source +them directly + + . /etc/bash_completion + +If you supply a command line argument the script will be written +there. + + +``` +rclone genautocomplete [output_file] +``` + +## rclone gendocs + +Output markdown docs for rclone to the directory supplied. + +### Synopsis + + + +This produces markdown docs for the rclone commands to the directory +supplied. These are in a format suitable for hugo to render into the +rclone.org website. + +``` +rclone gendocs output_directory +``` + Copying single files -------------------- diff --git a/MANUAL.txt b/MANUAL.txt index 0f4b620a6..f29301b75 100644 --- a/MANUAL.txt +++ b/MANUAL.txt @@ -1,6 +1,6 @@ rclone(1) User Manual Nick Craig-Wood -Jul 13, 2016 +Aug 04, 2016 @@ -134,7 +134,29 @@ You can define as many storage paths as you like in the config file. 
Subcommands -rclone copy source:path dest:path +rclone uses a system of subcommands. For example + + rclone ls remote:path # lists a re + rclone copy /local/path remote:path # copies /local/path to the remote + rclone sync /local/path remote:path # syncs /local/path to the remote + + +rclone config + +Enter an interactive configuration session. + +Synopsis + +Enter an interactive configuration session. + + rclone config + + +rclone copy + +Copy files from source to dest, skipping already copied + +Synopsis Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files @@ -174,7 +196,14 @@ source or destination. See the --no-traverse option for controlling whether rclone lists the destination directory or not. -rclone sync source:path dest:path + rclone copy source:path dest:path + + +rclone sync + +Make source and dest identical, modifying destination only. + +Synopsis Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time @@ -195,7 +224,14 @@ extended explanation in the copy command above if unsure. If dest:path doesn't exist, it is created and the source:path contents go there. -move source:path dest:path + rclone sync source:path dest:path + + +rclone move + +Move files from source to dest. + +Synopsis Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap. @@ -212,50 +248,14 @@ then delete the original (if no errors on copy) in source:path. IMPORTANT: Since this can cause data loss, test first with the --dry-run flag. -rclone ls remote:path + rclone move source:path dest:path -List all the objects in the path with size and path. -rclone lsd remote:path +rclone delete -List all directories/containers/buckets in the the path. +Remove the contents of path. 
-rclone lsl remote:path - -List all the objects in the the path with modification time, size and -path. - -rclone md5sum remote:path - -Produces an md5sum file for all the objects in the path. This is in the -same format as the standard md5sum tool produces. - -rclone sha1sum remote:path - -Produces an sha1sum file for all the objects in the path. This is in the -same format as the standard sha1sum tool produces. - -rclone size remote:path - -Prints the total size of objects in remote:path and the number of -objects. - -rclone mkdir remote:path - -Make the path if it doesn't already exist - -rclone rmdir remote:path - -Remove the path. Note that you can't remove a path with objects in it, -use purge for that. - -rclone purge remote:path - -Remove the path and all of its contents. Note that this does not obey -include/exclude filters - everything will be removed. Use delete if you -want to selectively delete files. - -rclone delete remote:path +Synopsis Remove the contents of path. Unlike purge it obeys include/exclude filters so can be used to selectively delete files. @@ -274,7 +274,50 @@ Then delete That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes. -rclone check source:path dest:path + rclone delete remote:path + + +rclone purge + +Remove the path and all of its contents. + +Synopsis + +Remove the path and all of its contents. Note that this does not obey +include/exclude filters - everything will be removed. Use delete if you +want to selectively delete files. + + rclone purge remote:path + + +rclone mkdir + +Make the path if it doesn't already exist. + +Synopsis + +Make the path if it doesn't already exist. + + rclone mkdir remote:path + + +rclone rmdir + +Remove the path if empty. + +Synopsis + +Remove the path. Note that you can't remove a path with objects in it, +use purge for that. + + rclone rmdir remote:path + + +rclone check + +Checks the files in the source and destination match. 
+ +Synopsis Checks the files in the source and destination match. It compares sizes and MD5SUMs and prints a report of files which don't match. It doesn't @@ -282,12 +325,105 @@ alter the source or destination. --size-only may be used to only compare the sizes, not the MD5SUMs. -rclone cleanup remote:path + rclone check source:path dest:path + + +rclone ls + +List all the objects in the the path with size and path. + +Synopsis + +List all the objects in the the path with size and path. + + rclone ls remote:path + + +rclone lsd + +List all directories/containers/buckets in the the path. + +Synopsis + +List all directories/containers/buckets in the the path. + + rclone lsd remote:path + + +rclone lsl + +List all the objects path with modification time, size and path. + +Synopsis + +List all the objects path with modification time, size and path. + + rclone lsl remote:path + + +rclone md5sum + +Produces an md5sum file for all the objects in the path. + +Synopsis + +Produces an md5sum file for all the objects in the path. This is in the +same format as the standard md5sum tool produces. + + rclone md5sum remote:path + + +rclone sha1sum + +Produces an sha1sum file for all the objects in the path. + +Synopsis + +Produces an sha1sum file for all the objects in the path. This is in the +same format as the standard sha1sum tool produces. + + rclone sha1sum remote:path + + +rclone size + +Prints the total size and number of objects in remote:path. + +Synopsis + +Prints the total size and number of objects in remote:path. + + rclone size remote:path + + +rclone version + +Show the version number. + +Synopsis + +Show the version number. + + rclone version + + +rclone cleanup + +Clean up the remote if possible + +Synopsis Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes. -rclone dedupe remote:path + rclone cleanup remote:path + + +rclone dedupe + +Interactively find duplicate files delete/rename them. 
+ +Synopsis By default dedup interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with @@ -347,7 +483,8 @@ The result being 6048320 2016-03-05 16:22:46.185000000 two-2.txt 1744073 2016-03-05 16:22:38.104000000 two-3.txt -Dedupe can be run non interactively using the --dedupe-mode flag. +Dedupe can be run non interactively using the --dedupe-mode flag or by +using an extra parameter with the same value - --dedupe-mode interactive - interactive as above. - --dedupe-mode skip - removes identical files then skips @@ -366,13 +503,63 @@ Photos directory, do rclone dedupe --dedupe-mode rename "drive:Google Photos" -rclone config +Or -Enter an interactive configuration session. + rclone dedupe rename "drive:Google Photos" -rclone help + rclone dedupe [mode] remote:path -Prints help on rclone commands and options. +Options + + --dedupe-mode value Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive") + + +rclone authorize + +Remote authorization. + +Synopsis + +Remote authorization. Used to authorize a remote or headless rclone from +a machine with a browser - use as instructed by rclone config. + + rclone authorize + + +rclone genautocomplete + +Output bash completion script for rclone. + +Synopsis + +Generates a bash shell autocompletion script for rclone. + +This writes to /etc/bash_completion.d/rclone by default so will probably +need to be run with sudo or as root, eg + + sudo rclone genautocomplete + +Logout and login again to use the autocompletion scripts, or source them +directly + + . /etc/bash_completion + +If you supply a command line argument the script will be written there. + + rclone genautocomplete [output_file] + + +rclone gendocs + +Output markdown docs for rclone to the directory supplied. + +Synopsis + +This produces markdown docs for the rclone commands to the directory +supplied. These are in a format suitable for hugo to render into the +rclone.org website. 
+ + rclone gendocs output_directory Copying single files diff --git a/Makefile b/Makefile index 678ef851d..d38512565 100644 --- a/Makefile +++ b/Makefile @@ -40,7 +40,7 @@ doc: rclone.1 MANUAL.html MANUAL.txt rclone.1: MANUAL.md pandoc -s --from markdown --to man MANUAL.md -o rclone.1 -MANUAL.md: make_manual.py docs/content/*.md +MANUAL.md: make_manual.py docs/content/*.md commanddocs ./make_manual.py MANUAL.html: MANUAL.md @@ -49,6 +49,9 @@ MANUAL.html: MANUAL.md MANUAL.txt: MANUAL.md pandoc -s --from markdown --to plain MANUAL.md -o MANUAL.txt +commanddocs: rclone + rclone gendocs docs/content/commands/ + install: rclone install -d ${DESTDIR}/usr/bin install -t ${DESTDIR}/usr/bin ${GOPATH}/bin/rclone diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md new file mode 100644 index 000000000..df38254af --- /dev/null +++ b/docs/content/commands/rclone.md @@ -0,0 +1,140 @@ +--- +date: 2016-08-04T21:37:09+01:00 +title: "rclone" +slug: rclone +url: /commands/rclone/ +--- +## rclone + +Sync files and directories to and from local and remote object stores - v1.32 + +### Synopsis + + + +Rclone is a command line program to sync files and directories to and +from various cloud storage systems, such as: + + * Google Drive + * Amazon S3 + * Openstack Swift / Rackspace cloud files / Memset Memstore + * Dropbox + * Google Cloud Storage + * Amazon Drive + * Microsoft One Drive + * Hubic + * Backblaze B2 + * Yandex Disk + * The local filesystem + +Features + + * MD5/SHA1 hashes checked at all times for file integrity + * Timestamps preserved on files + * Partial syncs supported on a whole file basis + * Copy mode to just copy new/changed files + * Sync (one way) mode to make a directory identical + * Check mode to check for file hash equality + * Can sync to and from network, eg two different cloud accounts + +See the home page for installation, usage, documentation, changelog +and configuration walkthroughs. 
+ + * http://rclone.org/ + + +``` +rclone +``` + +### Options + +``` + --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M) + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. + --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + --cpuprofile string Write cpu profile to file + --delete-after When synchronizing, delete files on destination after transfering + --delete-before When synchronizing, delete files on destination before transfering + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. + --drive-chunk-size value Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) + --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. + --dropbox-chunk-size value Upload chunk size. Max 150M. 
(default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude string Exclude files matching pattern + --exclude-from string Read exclude patterns from file + --files-from string Read list of source-file names from file + -f, --filter string Add a file-filtering rule + --filter-from string Read filtering patterns from a file + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --include string Include files matching pattern + --include-from string Read include patterns from file + --log-file string Log everything to this file + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. 
(default 10M) + --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval to print stats (0 to disable) (default 1m0s) + --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G) + --timeout duration IO idle timeout (default 5m0s) + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. + -v, --verbose Print lots more stuff + -V, --version Print the version number +``` + +### SEE ALSO
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization. +* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match. +* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible +* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. +* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied +* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files and delete/rename them. +* [rclone delete](/commands/rclone_delete/) - Remove the contents of path. +* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output bash completion script for rclone. +* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied. +* [rclone ls](/commands/rclone_ls/) - List all the objects in the path with size and path. +* [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path. +* [rclone lsl](/commands/rclone_lsl/) - List all the objects in the path with modification time, size and path. 
+* [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path. +* [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist. +* [rclone move](/commands/rclone_move/) - Move files from source to dest. +* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents. +* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty. +* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces an sha1sum file for all the objects in the path. +* [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path. +* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only. +* [rclone version](/commands/rclone_version/) - Show the version number. + +###### Auto generated by spf13/cobra on 4-Aug-2016 diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md new file mode 100644 index 000000000..afa169e8e --- /dev/null +++ b/docs/content/commands/rclone_authorize.md @@ -0,0 +1,92 @@ +--- +date: 2016-08-04T21:37:09+01:00 +title: "rclone authorize" +slug: rclone_authorize +url: /commands/rclone_authorize/ +--- +## rclone authorize + +Remote authorization. + +### Synopsis + + + +Remote authorization. Used to authorize a remote or headless +rclone from a machine with a browser - use as instructed by +rclone config. + +``` +rclone authorize +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M) + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. 
+ --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + --cpuprofile string Write cpu profile to file + --delete-after When synchronizing, delete files on destination after transfering + --delete-before When synchronizing, delete files on destination before transfering + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. + --drive-chunk-size value Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) + --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. + --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude string Exclude files matching pattern + --exclude-from string Read exclude patterns from file + --files-from string Read list of source-file names from file + -f, --filter string Add a file-filtering rule + --filter-from string Read filtering patterns from a file + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. 
+ -I, --ignore-times Don't skip files that match size and time - transfer all files + --include string Include files matching pattern + --include-from string Read include patterns from file + --log-file string Log everything to this file + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M) + --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval to print stats (0 to disable) (default 1m0s) + --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G) + --timeout duration IO idle timeout (default 5m0s) + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. 
+ -v, --verbose Print lots more stuff +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32 + +###### Auto generated by spf13/cobra on 4-Aug-2016 diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md new file mode 100644 index 000000000..793cee468 --- /dev/null +++ b/docs/content/commands/rclone_check.md @@ -0,0 +1,95 @@ +--- +date: 2016-08-04T21:37:09+01:00 +title: "rclone check" +slug: rclone_check +url: /commands/rclone_check/ +--- +## rclone check + +Checks the files in the source and destination match. + +### Synopsis + + + +Checks the files in the source and destination match. It +compares sizes and MD5SUMs and prints a report of files which +don't match. It doesn't alter the source or destination. + +`--size-only` may be used to only compare the sizes, not the MD5SUMs. + + +``` +rclone check source:path dest:path +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M) + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. + --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. 
(default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + --cpuprofile string Write cpu profile to file + --delete-after When synchronizing, delete files on destination after transfering + --delete-before When synchronizing, delete files on destination before transfering + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. + --drive-chunk-size value Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) + --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. + --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude string Exclude files matching pattern + --exclude-from string Read exclude patterns from file + --files-from string Read list of source-file names from file + -f, --filter string Add a file-filtering rule + --filter-from string Read filtering patterns from a file + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. 
+ -I, --ignore-times Don't skip files that match size and time - transfer all files + --include string Include files matching pattern + --include-from string Read include patterns from file + --log-file string Log everything to this file + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M) + --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval to print stats (0 to disable) (default 1m0s) + --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G) + --timeout duration IO idle timeout (default 5m0s) + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. 
+ -v, --verbose Print lots more stuff +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32 + +###### Auto generated by spf13/cobra on 4-Aug-2016 diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md new file mode 100644 index 000000000..3744e4a8c --- /dev/null +++ b/docs/content/commands/rclone_cleanup.md @@ -0,0 +1,92 @@ +--- +date: 2016-08-04T21:37:09+01:00 +title: "rclone cleanup" +slug: rclone_cleanup +url: /commands/rclone_cleanup/ +--- +## rclone cleanup + +Clean up the remote if possible + +### Synopsis + + + +Clean up the remote if possible. Empty the trash or delete old file +versions. Not supported by all remotes. + + +``` +rclone cleanup remote:path +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M) + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. + --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. 
(default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + --cpuprofile string Write cpu profile to file + --delete-after When synchronizing, delete files on destination after transfering + --delete-before When synchronizing, delete files on destination before transfering + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. + --drive-chunk-size value Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) + --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. + --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude string Exclude files matching pattern + --exclude-from string Read exclude patterns from file + --files-from string Read list of source-file names from file + -f, --filter string Add a file-filtering rule + --filter-from string Read filtering patterns from a file + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. 
+ -I, --ignore-times Don't skip files that match size and time - transfer all files + --include string Include files matching pattern + --include-from string Read include patterns from file + --log-file string Log everything to this file + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M) + --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval to print stats (0 to disable) (default 1m0s) + --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G) + --timeout duration IO idle timeout (default 5m0s) + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. 
+ -v, --verbose Print lots more stuff +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32 + +###### Auto generated by spf13/cobra on 4-Aug-2016 diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md new file mode 100644 index 000000000..eab1c7383 --- /dev/null +++ b/docs/content/commands/rclone_config.md @@ -0,0 +1,89 @@ +--- +date: 2016-08-04T21:37:09+01:00 +title: "rclone config" +slug: rclone_config +url: /commands/rclone_config/ +--- +## rclone config + +Enter an interactive configuration session. + +### Synopsis + + +Enter an interactive configuration session. + +``` +rclone config +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M) + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. + --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. 
(default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + --cpuprofile string Write cpu profile to file + --delete-after When synchronizing, delete files on destination after transfering + --delete-before When synchronizing, delete files on destination before transfering + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. + --drive-chunk-size value Upload chunk size. Must a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) + --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. + --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude string Exclude files matching pattern + --exclude-from string Read exclude patterns from file + --files-from string Read list of source-file names from file + -f, --filter string Add a file-filtering rule + --filter-from string Read filtering patterns from a file + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. 
+ -I, --ignore-times Don't skip files that match size and time - transfer all files + --include string Include files matching pattern + --include-from string Read include patterns from file + --log-file string Log everything to this file + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M) + --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval to print stats (0 to disable) (default 1m0s) + --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G) + --timeout duration IO idle timeout (default 5m0s) + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. 
+  -v, --verbose                        Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
+
+###### Auto generated by spf13/cobra on 4-Aug-2016
diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md
new file mode 100644
index 000000000..efc517b9e
--- /dev/null
+++ b/docs/content/commands/rclone_copy.md
@@ -0,0 +1,128 @@
+---
+date: 2016-08-04T21:37:09+01:00
+title: "rclone copy"
+slug: rclone_copy
+url: /commands/rclone_copy/
+---
+## rclone copy
+
+Copy files from source to dest, skipping already copied
+
+### Synopsis
+
+
+
+
+Copy the source to the destination. Doesn't transfer
+unchanged files, testing by size and modification time or
+MD5SUM. Doesn't delete files from the destination.
+
+Note that it is always the contents of the directory that is synced,
+not the directory so when source:path is a directory, it's the
+contents of source:path that are copied, not the directory name and
+contents.
+
+If dest:path doesn't exist, it is created and the source:path contents
+go there.
+
+For example
+
+    rclone copy source:sourcepath dest:destpath
+
+Let's say there are two files in sourcepath
+
+    sourcepath/one.txt
+    sourcepath/two.txt
+
+This copies them to
+
+    destpath/one.txt
+    destpath/two.txt
+
+Not to
+
+    destpath/sourcepath/one.txt
+    destpath/sourcepath/two.txt
+
+If you are familiar with `rsync`, rclone always works as if you had
+written a trailing / - meaning "copy the contents of this directory".
+This applies to all commands and whether you are talking about the
+source or destination.
+
+See the `--no-traverse` option for controlling whether rclone lists
+the destination directory or not.
+
+
+```
+rclone copy source:path dest:path
+```
+
+### Options inherited from parent commands
+
+```
+      --acd-templink-threshold value   Files >= this size will be downloaded via their tempLink. (default 9G)
+      --ask-password                   Allow prompt for password for encrypted configuration. (default true)
+      --b2-chunk-size value            Upload chunk size. Must fit in memory. (default 96M)
+      --b2-test-mode string            A flag string for X-Bz-Test-Mode header.
+      --b2-upload-cutoff value         Cutoff for switching to chunked upload (default 190.735M)
+      --b2-versions                    Include old versions in directory listings.
+      --bwlimit value                  Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+      --checkers int                   Number of checkers to run in parallel. (default 8)
+  -c, --checksum                       Skip based on checksum & size, not mod-time & size
+      --config string                  Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration            Connect timeout (default 1m0s)
+      --cpuprofile string              Write cpu profile to file
+      --delete-after                   When synchronizing, delete files on destination after transferring
+      --delete-before                  When synchronizing, delete files on destination before transferring
+      --delete-during                  When synchronizing, delete files during transfer (default)
+      --delete-excluded                Delete files on dest excluded from sync
+      --drive-auth-owner-only          Only consider files owned by the authenticated user. Requires drive-full-list.
+      --drive-chunk-size value         Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+      --drive-formats string           Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-full-list                Use a full listing for directory list. More data but usually quicker. (obsolete)
+      --drive-upload-cutoff value      Cutoff for switching to chunked upload (default 8M)
+      --drive-use-trash                Send files to the trash instead of deleting permanently.
+      --dropbox-chunk-size value       Upload chunk size. Max 150M. (default 128M)
+  -n, --dry-run                        Do a trial run with no permanent changes
+      --dump-bodies                    Dump HTTP headers and bodies - may contain sensitive info
+      --dump-filters                   Dump the filters to the output
+      --dump-headers                   Dump HTTP headers - may contain sensitive info
+      --exclude string                 Exclude files matching pattern
+      --exclude-from string            Read exclude patterns from file
+      --files-from string              Read list of source-file names from file
+  -f, --filter string                  Add a file-filtering rule
+      --filter-from string             Read filtering patterns from a file
+      --ignore-existing                Skip all files that exist on destination
+      --ignore-size                    Ignore size when skipping; use mod-time or checksum.
+  -I, --ignore-times                   Don't skip files that match size and time - transfer all files
+      --include string                 Include files matching pattern
+      --include-from string            Read include patterns from file
+      --log-file string                Log everything to this file
+      --low-level-retries int          Number of low level retries to do. (default 10)
+      --max-age string                 Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+      --max-depth int                  If set, limits the recursion depth to this. (default -1)
+      --max-size value                 Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+      --memprofile string              Write memory profile to file
+      --min-age string                 Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+      --min-size value                 Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+      --modify-window duration         Max time diff to be considered the same (default 1ns)
+      --no-check-certificate           Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding               Don't set Accept-Encoding: gzip.
+      --no-traverse                    Don't traverse destination file system on copy.
+      --no-update-modtime              Don't update destination mod-time if files identical.
+      --onedrive-chunk-size value      Above this size files will be chunked - must be multiple of 320k. (default 10M)
+      --onedrive-upload-cutoff value   Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+  -q, --quiet                          Print as little stuff as possible
+      --retries int                    Retry operations this many times if they fail (default 3)
+      --size-only                      Skip based on size only, not mod-time or checksum
+      --stats duration                 Interval to print stats (0 to disable) (default 1m0s)
+      --swift-chunk-size value         Above this size files will be chunked into a _segments container. (default 5G)
+      --timeout duration               IO idle timeout (default 5m0s)
+      --transfers int                  Number of file transfers to run in parallel. (default 4)
+  -u, --update                         Skip files that are newer on the destination.
+  -v, --verbose                        Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
+
+###### Auto generated by spf13/cobra on 4-Aug-2016
diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md
new file mode 100644
index 000000000..2ca94170a
--- /dev/null
+++ b/docs/content/commands/rclone_dedupe.md
@@ -0,0 +1,170 @@
+---
+date: 2016-08-04T21:37:09+01:00
+title: "rclone dedupe"
+slug: rclone_dedupe
+url: /commands/rclone_dedupe/
+---
+## rclone dedupe
+
+Interactively find duplicate files and delete/rename them.
+
+### Synopsis
+
+
+
+
+By default `dedupe` interactively finds duplicate files and offers to
+delete all but one or rename them to be different. Only useful with
+Google Drive which can have duplicate file names.
+
+The `dedupe` command will delete all but one of any identical (same
+md5sum) files it finds without confirmation. This means that for most
+duplicated files the `dedupe` command will not be interactive. You
+can use `--dry-run` to see what would happen without doing anything.
+
+Here is an example run.
+
+Before - with duplicates
+
+    $ rclone lsl drive:dupes
+    6048320 2016-03-05 16:23:16.798000000 one.txt
+    6048320 2016-03-05 16:23:11.775000000 one.txt
+    564374 2016-03-05 16:23:06.731000000 one.txt
+    6048320 2016-03-05 16:18:26.092000000 one.txt
+    6048320 2016-03-05 16:22:46.185000000 two.txt
+    1744073 2016-03-05 16:22:38.104000000 two.txt
+    564374 2016-03-05 16:22:52.118000000 two.txt
+
+Now the `dedupe` session
+
+    $ rclone dedupe drive:dupes
+    2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
+    one.txt: Found 4 duplicates - deleting identical copies
+    one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
+    one.txt: 2 duplicates remain
+    1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
+    2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
+    s) Skip and do nothing
+    k) Keep just one (choose which in next step)
+    r) Rename all to be different (by changing file.jpg to file-1.jpg)
+    s/k/r> k
+    Enter the number of the file to keep> 1
+    one.txt: Deleted 1 extra copies
+    two.txt: Found 3 duplicates - deleting identical copies
+    two.txt: 3 duplicates remain
+    1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
+    2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
+    3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
+    s) Skip and do nothing
+    k) Keep just one (choose which in next step)
+    r) Rename all to be different (by changing file.jpg to file-1.jpg)
+    s/k/r> r
+    two-1.txt: renamed from: two.txt
+    two-2.txt: renamed from: two.txt
+    two-3.txt: renamed from: two.txt
+
+The result being
+
+    $ rclone lsl drive:dupes
+    6048320 2016-03-05 16:23:16.798000000 one.txt
+    564374 2016-03-05 16:22:52.118000000 two-1.txt
+    6048320 2016-03-05 16:22:46.185000000 two-2.txt
+    1744073 2016-03-05 16:22:38.104000000 two-3.txt
+
+Dedupe can be run non-interactively using the `--dedupe-mode` flag, or by using an extra parameter with the same value
+
+ * `--dedupe-mode interactive` - interactive as above.
+ * `--dedupe-mode skip` - removes identical files then skips anything left.
+ * `--dedupe-mode first` - removes identical files then keeps the first one.
+ * `--dedupe-mode newest` - removes identical files then keeps the newest one.
+ * `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
+ * `--dedupe-mode rename` - removes identical files then renames the rest to be different.
+
+For example, to rename all the identically named photos in your Google Photos directory, do
+
+    rclone dedupe --dedupe-mode rename "drive:Google Photos"
+
+Or
+
+    rclone dedupe rename "drive:Google Photos"
+
+
+```
+rclone dedupe [mode] remote:path
+```
+
+### Options
+
+```
+      --dedupe-mode value   Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
+```
+
+### Options inherited from parent commands
+
+```
+      --acd-templink-threshold value   Files >= this size will be downloaded via their tempLink. (default 9G)
+      --ask-password                   Allow prompt for password for encrypted configuration. (default true)
+      --b2-chunk-size value            Upload chunk size. Must fit in memory. (default 96M)
+      --b2-test-mode string            A flag string for X-Bz-Test-Mode header.
+      --b2-upload-cutoff value         Cutoff for switching to chunked upload (default 190.735M)
+      --b2-versions                    Include old versions in directory listings.
+      --bwlimit value                  Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+      --checkers int                   Number of checkers to run in parallel. (default 8)
+  -c, --checksum                       Skip based on checksum & size, not mod-time & size
+      --config string                  Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration            Connect timeout (default 1m0s)
+      --cpuprofile string              Write cpu profile to file
+      --delete-after                   When synchronizing, delete files on destination after transferring
+      --delete-before                  When synchronizing, delete files on destination before transferring
+      --delete-during                  When synchronizing, delete files during transfer (default)
+      --delete-excluded                Delete files on dest excluded from sync
+      --drive-auth-owner-only          Only consider files owned by the authenticated user. Requires drive-full-list.
+      --drive-chunk-size value         Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+      --drive-formats string           Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-full-list                Use a full listing for directory list. More data but usually quicker. (obsolete)
+      --drive-upload-cutoff value      Cutoff for switching to chunked upload (default 8M)
+      --drive-use-trash                Send files to the trash instead of deleting permanently.
+      --dropbox-chunk-size value       Upload chunk size. Max 150M. (default 128M)
+  -n, --dry-run                        Do a trial run with no permanent changes
+      --dump-bodies                    Dump HTTP headers and bodies - may contain sensitive info
+      --dump-filters                   Dump the filters to the output
+      --dump-headers                   Dump HTTP headers - may contain sensitive info
+      --exclude string                 Exclude files matching pattern
+      --exclude-from string            Read exclude patterns from file
+      --files-from string              Read list of source-file names from file
+  -f, --filter string                  Add a file-filtering rule
+      --filter-from string             Read filtering patterns from a file
+      --ignore-existing                Skip all files that exist on destination
+      --ignore-size                    Ignore size when skipping; use mod-time or checksum.
+  -I, --ignore-times                   Don't skip files that match size and time - transfer all files
+      --include string                 Include files matching pattern
+      --include-from string            Read include patterns from file
+      --log-file string                Log everything to this file
+      --low-level-retries int          Number of low level retries to do. (default 10)
+      --max-age string                 Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+      --max-depth int                  If set, limits the recursion depth to this. (default -1)
+      --max-size value                 Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+      --memprofile string              Write memory profile to file
+      --min-age string                 Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+      --min-size value                 Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+      --modify-window duration         Max time diff to be considered the same (default 1ns)
+      --no-check-certificate           Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding               Don't set Accept-Encoding: gzip.
+      --no-traverse                    Don't traverse destination file system on copy.
+      --no-update-modtime              Don't update destination mod-time if files identical.
+      --onedrive-chunk-size value      Above this size files will be chunked - must be multiple of 320k. (default 10M)
+      --onedrive-upload-cutoff value   Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+  -q, --quiet                          Print as little stuff as possible
+      --retries int                    Retry operations this many times if they fail (default 3)
+      --size-only                      Skip based on size only, not mod-time or checksum
+      --stats duration                 Interval to print stats (0 to disable) (default 1m0s)
+      --swift-chunk-size value         Above this size files will be chunked into a _segments container. (default 5G)
+      --timeout duration               IO idle timeout (default 5m0s)
+      --transfers int                  Number of file transfers to run in parallel. (default 4)
+  -u, --update                         Skip files that are newer on the destination.
+  -v, --verbose                        Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
+
+###### Auto generated by spf13/cobra on 4-Aug-2016
diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md
new file mode 100644
index 000000000..529621a36
--- /dev/null
+++ b/docs/content/commands/rclone_delete.md
@@ -0,0 +1,106 @@
+---
+date: 2016-08-04T21:37:09+01:00
+title: "rclone delete"
+slug: rclone_delete
+url: /commands/rclone_delete/
+---
+## rclone delete
+
+Remove the contents of path.
+
+### Synopsis
+
+
+
+
+Remove the contents of path. Unlike `purge` it obeys include/exclude
+filters so can be used to selectively delete files.
+
+Eg delete all files bigger than 100MBytes
+
+Check what would be deleted first (use either)
+
+    rclone --min-size 100M lsl remote:path
+    rclone --dry-run --min-size 100M delete remote:path
+
+Then delete
+
+    rclone --min-size 100M delete remote:path
+
+That reads "delete everything with a minimum size of 100 MB", hence
+delete all files bigger than 100MBytes.
+
+
+```
+rclone delete remote:path
+```
+
+### Options inherited from parent commands
+
+```
+      --acd-templink-threshold value   Files >= this size will be downloaded via their tempLink. (default 9G)
+      --ask-password                   Allow prompt for password for encrypted configuration. (default true)
+      --b2-chunk-size value            Upload chunk size. Must fit in memory. (default 96M)
+      --b2-test-mode string            A flag string for X-Bz-Test-Mode header.
+      --b2-upload-cutoff value         Cutoff for switching to chunked upload (default 190.735M)
+      --b2-versions                    Include old versions in directory listings.
+      --bwlimit value                  Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+      --checkers int                   Number of checkers to run in parallel. (default 8)
+  -c, --checksum                       Skip based on checksum & size, not mod-time & size
+      --config string                  Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration            Connect timeout (default 1m0s)
+      --cpuprofile string              Write cpu profile to file
+      --delete-after                   When synchronizing, delete files on destination after transferring
+      --delete-before                  When synchronizing, delete files on destination before transferring
+      --delete-during                  When synchronizing, delete files during transfer (default)
+      --delete-excluded                Delete files on dest excluded from sync
+      --drive-auth-owner-only          Only consider files owned by the authenticated user. Requires drive-full-list.
+      --drive-chunk-size value         Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+      --drive-formats string           Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-full-list                Use a full listing for directory list. More data but usually quicker. (obsolete)
+      --drive-upload-cutoff value      Cutoff for switching to chunked upload (default 8M)
+      --drive-use-trash                Send files to the trash instead of deleting permanently.
+      --dropbox-chunk-size value       Upload chunk size. Max 150M. (default 128M)
+  -n, --dry-run                        Do a trial run with no permanent changes
+      --dump-bodies                    Dump HTTP headers and bodies - may contain sensitive info
+      --dump-filters                   Dump the filters to the output
+      --dump-headers                   Dump HTTP headers - may contain sensitive info
+      --exclude string                 Exclude files matching pattern
+      --exclude-from string            Read exclude patterns from file
+      --files-from string              Read list of source-file names from file
+  -f, --filter string                  Add a file-filtering rule
+      --filter-from string             Read filtering patterns from a file
+      --ignore-existing                Skip all files that exist on destination
+      --ignore-size                    Ignore size when skipping; use mod-time or checksum.
+  -I, --ignore-times                   Don't skip files that match size and time - transfer all files
+      --include string                 Include files matching pattern
+      --include-from string            Read include patterns from file
+      --log-file string                Log everything to this file
+      --low-level-retries int          Number of low level retries to do. (default 10)
+      --max-age string                 Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+      --max-depth int                  If set, limits the recursion depth to this. (default -1)
+      --max-size value                 Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+      --memprofile string              Write memory profile to file
+      --min-age string                 Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+      --min-size value                 Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+      --modify-window duration         Max time diff to be considered the same (default 1ns)
+      --no-check-certificate           Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding               Don't set Accept-Encoding: gzip.
+      --no-traverse                    Don't traverse destination file system on copy.
+      --no-update-modtime              Don't update destination mod-time if files identical.
+      --onedrive-chunk-size value      Above this size files will be chunked - must be multiple of 320k. (default 10M)
+      --onedrive-upload-cutoff value   Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+  -q, --quiet                          Print as little stuff as possible
+      --retries int                    Retry operations this many times if they fail (default 3)
+      --size-only                      Skip based on size only, not mod-time or checksum
+      --stats duration                 Interval to print stats (0 to disable) (default 1m0s)
+      --swift-chunk-size value         Above this size files will be chunked into a _segments container. (default 5G)
+      --timeout duration               IO idle timeout (default 5m0s)
+      --transfers int                  Number of file transfers to run in parallel. (default 4)
+  -u, --update                         Skip files that are newer on the destination.
+  -v, --verbose                        Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
+
+###### Auto generated by spf13/cobra on 4-Aug-2016
diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md
new file mode 100644
index 000000000..d2ac47aa8
--- /dev/null
+++ b/docs/content/commands/rclone_genautocomplete.md
@@ -0,0 +1,104 @@
+---
+date: 2016-08-04T21:37:09+01:00
+title: "rclone genautocomplete"
+slug: rclone_genautocomplete
+url: /commands/rclone_genautocomplete/
+---
+## rclone genautocomplete
+
+Output bash completion script for rclone.
+
+### Synopsis
+
+
+
+
+Generates a bash shell autocompletion script for rclone.
+
+This writes to /etc/bash_completion.d/rclone by default so will
+probably need to be run with sudo or as root, eg
+
+    sudo rclone genautocomplete
+
+Logout and login again to use the autocompletion scripts, or source
+them directly
+
+    . /etc/bash_completion
+
+If you supply a command line argument the script will be written
+there.
+
+
+```
+rclone genautocomplete [output_file]
+```
+
+### Options inherited from parent commands
+
+```
+      --acd-templink-threshold value   Files >= this size will be downloaded via their tempLink. (default 9G)
+      --ask-password                   Allow prompt for password for encrypted configuration. (default true)
+      --b2-chunk-size value            Upload chunk size. Must fit in memory. (default 96M)
+      --b2-test-mode string            A flag string for X-Bz-Test-Mode header.
+      --b2-upload-cutoff value         Cutoff for switching to chunked upload (default 190.735M)
+      --b2-versions                    Include old versions in directory listings.
+      --bwlimit value                  Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+      --checkers int                   Number of checkers to run in parallel. (default 8)
+  -c, --checksum                       Skip based on checksum & size, not mod-time & size
+      --config string                  Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration            Connect timeout (default 1m0s)
+      --cpuprofile string              Write cpu profile to file
+      --delete-after                   When synchronizing, delete files on destination after transferring
+      --delete-before                  When synchronizing, delete files on destination before transferring
+      --delete-during                  When synchronizing, delete files during transfer (default)
+      --delete-excluded                Delete files on dest excluded from sync
+      --drive-auth-owner-only          Only consider files owned by the authenticated user. Requires drive-full-list.
+      --drive-chunk-size value         Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+      --drive-formats string           Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-full-list                Use a full listing for directory list. More data but usually quicker. (obsolete)
+      --drive-upload-cutoff value      Cutoff for switching to chunked upload (default 8M)
+      --drive-use-trash                Send files to the trash instead of deleting permanently.
+      --dropbox-chunk-size value       Upload chunk size. Max 150M. (default 128M)
+  -n, --dry-run                        Do a trial run with no permanent changes
+      --dump-bodies                    Dump HTTP headers and bodies - may contain sensitive info
+      --dump-filters                   Dump the filters to the output
+      --dump-headers                   Dump HTTP headers - may contain sensitive info
+      --exclude string                 Exclude files matching pattern
+      --exclude-from string            Read exclude patterns from file
+      --files-from string              Read list of source-file names from file
+  -f, --filter string                  Add a file-filtering rule
+      --filter-from string             Read filtering patterns from a file
+      --ignore-existing                Skip all files that exist on destination
+      --ignore-size                    Ignore size when skipping; use mod-time or checksum.
+  -I, --ignore-times                   Don't skip files that match size and time - transfer all files
+      --include string                 Include files matching pattern
+      --include-from string            Read include patterns from file
+      --log-file string                Log everything to this file
+      --low-level-retries int          Number of low level retries to do. (default 10)
+      --max-age string                 Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+      --max-depth int                  If set, limits the recursion depth to this. (default -1)
+      --max-size value                 Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+      --memprofile string              Write memory profile to file
+      --min-age string                 Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+      --min-size value                 Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+      --modify-window duration         Max time diff to be considered the same (default 1ns)
+      --no-check-certificate           Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding               Don't set Accept-Encoding: gzip.
+      --no-traverse                    Don't traverse destination file system on copy.
+      --no-update-modtime              Don't update destination mod-time if files identical.
+      --onedrive-chunk-size value      Above this size files will be chunked - must be multiple of 320k. (default 10M)
+      --onedrive-upload-cutoff value   Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+  -q, --quiet                          Print as little stuff as possible
+      --retries int                    Retry operations this many times if they fail (default 3)
+      --size-only                      Skip based on size only, not mod-time or checksum
+      --stats duration                 Interval to print stats (0 to disable) (default 1m0s)
+      --swift-chunk-size value         Above this size files will be chunked into a _segments container. (default 5G)
+      --timeout duration               IO idle timeout (default 5m0s)
+      --transfers int                  Number of file transfers to run in parallel. (default 4)
+  -u, --update                         Skip files that are newer on the destination.
+  -v, --verbose                        Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
+
+###### Auto generated by spf13/cobra on 4-Aug-2016
diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md
new file mode 100644
index 000000000..092456f9b
--- /dev/null
+++ b/docs/content/commands/rclone_gendocs.md
@@ -0,0 +1,92 @@
+---
+date: 2016-08-04T21:37:09+01:00
+title: "rclone gendocs"
+slug: rclone_gendocs
+url: /commands/rclone_gendocs/
+---
+## rclone gendocs
+
+Output markdown docs for rclone to the directory supplied.
+
+### Synopsis
+
+
+
+
+This produces markdown docs for the rclone commands to the directory
+supplied. These are in a format suitable for hugo to render into the
+rclone.org website.
+
+```
+rclone gendocs output_directory
+```
+
+### Options inherited from parent commands
+
+```
+      --acd-templink-threshold value   Files >= this size will be downloaded via their tempLink. (default 9G)
+      --ask-password                   Allow prompt for password for encrypted configuration. (default true)
+      --b2-chunk-size value            Upload chunk size. Must fit in memory. (default 96M)
+      --b2-test-mode string            A flag string for X-Bz-Test-Mode header.
+      --b2-upload-cutoff value         Cutoff for switching to chunked upload (default 190.735M)
+      --b2-versions                    Include old versions in directory listings.
+      --bwlimit value                  Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+      --checkers int                   Number of checkers to run in parallel. (default 8)
+  -c, --checksum                       Skip based on checksum & size, not mod-time & size
+      --config string                  Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration            Connect timeout (default 1m0s)
+      --cpuprofile string              Write cpu profile to file
+      --delete-after                   When synchronizing, delete files on destination after transferring
+      --delete-before                  When synchronizing, delete files on destination before transferring
+      --delete-during                  When synchronizing, delete files during transfer (default)
+      --delete-excluded                Delete files on dest excluded from sync
+      --drive-auth-owner-only          Only consider files owned by the authenticated user. Requires drive-full-list.
+      --drive-chunk-size value         Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+      --drive-formats string           Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-full-list                Use a full listing for directory list. More data but usually quicker. (obsolete)
+      --drive-upload-cutoff value      Cutoff for switching to chunked upload (default 8M)
+      --drive-use-trash                Send files to the trash instead of deleting permanently.
+      --dropbox-chunk-size value       Upload chunk size. Max 150M. (default 128M)
+  -n, --dry-run                        Do a trial run with no permanent changes
+      --dump-bodies                    Dump HTTP headers and bodies - may contain sensitive info
+      --dump-filters                   Dump the filters to the output
+      --dump-headers                   Dump HTTP headers - may contain sensitive info
+      --exclude string                 Exclude files matching pattern
+      --exclude-from string            Read exclude patterns from file
+      --files-from string              Read list of source-file names from file
+  -f, --filter string                  Add a file-filtering rule
+      --filter-from string             Read filtering patterns from a file
+      --ignore-existing                Skip all files that exist on destination
+      --ignore-size                    Ignore size when skipping; use mod-time or checksum.
+  -I, --ignore-times                   Don't skip files that match size and time - transfer all files
+      --include string                 Include files matching pattern
+      --include-from string            Read include patterns from file
+      --log-file string                Log everything to this file
+      --low-level-retries int          Number of low level retries to do. (default 10)
+      --max-age string                 Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+      --max-depth int                  If set, limits the recursion depth to this. (default -1)
+      --max-size value                 Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+      --memprofile string              Write memory profile to file
+      --min-age string                 Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+      --min-size value                 Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+      --modify-window duration         Max time diff to be considered the same (default 1ns)
+      --no-check-certificate           Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding               Don't set Accept-Encoding: gzip.
+      --no-traverse                    Don't traverse destination file system on copy.
+      --no-update-modtime              Don't update destination mod-time if files identical.
+      --onedrive-chunk-size value      Above this size files will be chunked - must be multiple of 320k. (default 10M)
+      --onedrive-upload-cutoff value   Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+  -q, --quiet                          Print as little stuff as possible
+      --retries int                    Retry operations this many times if they fail (default 3)
+      --size-only                      Skip based on size only, not mod-time or checksum
+      --stats duration                 Interval to print stats (0 to disable) (default 1m0s)
+      --swift-chunk-size value         Above this size files will be chunked into a _segments container. (default 5G)
+      --timeout duration               IO idle timeout (default 5m0s)
+      --transfers int                  Number of file transfers to run in parallel. (default 4)
+  -u, --update                         Skip files that are newer on the destination.
+  -v, --verbose                        Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
+
+###### Auto generated by spf13/cobra on 4-Aug-2016
diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md
new file mode 100644
index 000000000..dbf54a04b
--- /dev/null
+++ b/docs/content/commands/rclone_ls.md
@@ -0,0 +1,89 @@
+---
+date: 2016-08-04T21:37:09+01:00
+title: "rclone ls"
+slug: rclone_ls
+url: /commands/rclone_ls/
+---
+## rclone ls
+
+List all the objects in the path with size and path.
+
+### Synopsis
+
+
+List all the objects in the path with size and path.
+
+```
+rclone ls remote:path
+```
+
+### Options inherited from parent commands
+
+```
+      --acd-templink-threshold value   Files >= this size will be downloaded via their tempLink. (default 9G)
+      --ask-password                   Allow prompt for password for encrypted configuration. (default true)
+      --b2-chunk-size value            Upload chunk size. Must fit in memory. (default 96M)
+      --b2-test-mode string            A flag string for X-Bz-Test-Mode header.
+      --b2-upload-cutoff value         Cutoff for switching to chunked upload (default 190.735M)
+      --b2-versions                    Include old versions in directory listings.
+      --bwlimit value                  Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+      --checkers int                   Number of checkers to run in parallel. (default 8)
+  -c, --checksum                       Skip based on checksum & size, not mod-time & size
+      --config string                  Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration            Connect timeout (default 1m0s)
+      --cpuprofile string              Write cpu profile to file
+      --delete-after                   When synchronizing, delete files on destination after transferring
+      --delete-before                  When synchronizing, delete files on destination before transferring
+      --delete-during                  When synchronizing, delete files during transfer (default)
+      --delete-excluded                Delete files on dest excluded from sync
+      --drive-auth-owner-only          Only consider files owned by the authenticated user. Requires drive-full-list.
+      --drive-chunk-size value         Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+      --drive-formats string           Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-full-list                Use a full listing for directory list. More data but usually quicker. (obsolete)
+      --drive-upload-cutoff value      Cutoff for switching to chunked upload (default 8M)
+      --drive-use-trash                Send files to the trash instead of deleting permanently.
+      --dropbox-chunk-size value       Upload chunk size. Max 150M. (default 128M)
+  -n, --dry-run                        Do a trial run with no permanent changes
+      --dump-bodies                    Dump HTTP headers and bodies - may contain sensitive info
+      --dump-filters                   Dump the filters to the output
+      --dump-headers                   Dump HTTP headers - may contain sensitive info
+      --exclude string                 Exclude files matching pattern
+      --exclude-from string            Read exclude patterns from file
+      --files-from string              Read list of source-file names from file
+  -f, --filter string                  Add a file-filtering rule
+      --filter-from string             Read filtering patterns from a file
+      --ignore-existing                Skip all files that exist on destination
+      --ignore-size                    Ignore size when skipping; use mod-time or checksum.
+  -I, --ignore-times                   Don't skip files that match size and time - transfer all files
+      --include string                 Include files matching pattern
+      --include-from string            Read include patterns from file
+      --log-file string                Log everything to this file
+      --low-level-retries int          Number of low level retries to do. (default 10)
+      --max-age string                 Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+      --max-depth int                  If set, limits the recursion depth to this. (default -1)
+      --max-size value                 Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+      --memprofile string              Write memory profile to file
+      --min-age string                 Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+      --min-size value                 Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+      --modify-window duration         Max time diff to be considered the same (default 1ns)
+      --no-check-certificate           Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding               Don't set Accept-Encoding: gzip.
+      --no-traverse                    Don't traverse destination file system on copy.
+      --no-update-modtime              Don't update destination mod-time if files identical.
+      --onedrive-chunk-size value      Above this size files will be chunked - must be multiple of 320k. (default 10M)
+      --onedrive-upload-cutoff value   Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+  -q, --quiet                          Print as little stuff as possible
+      --retries int                    Retry operations this many times if they fail (default 3)
+      --size-only                      Skip based on size only, not mod-time or checksum
+      --stats duration                 Interval to print stats (0 to disable) (default 1m0s)
+      --swift-chunk-size value         Above this size files will be chunked into a _segments container. (default 5G)
+      --timeout duration               IO idle timeout (default 5m0s)
+      --transfers int                  Number of file transfers to run in parallel. (default 4)
+  -u, --update                         Skip files that are newer on the destination.
+ -v, --verbose Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
+
+###### Auto generated by spf13/cobra on 4-Aug-2016
diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md
new file mode 100644
index 000000000..c448f87cc
--- /dev/null
+++ b/docs/content/commands/rclone_lsd.md
@@ -0,0 +1,89 @@
+---
+date: 2016-08-04T21:37:09+01:00
+title: "rclone lsd"
+slug: rclone_lsd
+url: /commands/rclone_lsd/
+---
+## rclone lsd
+
+List all directories/containers/buckets in the path.
+
+### Synopsis
+
+
+List all directories/containers/buckets in the path.
+
+```
+rclone lsd remote:path
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ --cpuprofile string Write cpu profile to file
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently.
+ --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-filters Dump the filters to the output
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude string Exclude files matching pattern
+ --exclude-from string Read exclude patterns from file
+ --files-from string Read list of source-file names from file
+ -f, --filter string Add a file-filtering rule
+ --filter-from string Read filtering patterns from a file
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --include string Include files matching pattern
+ --include-from string Read include patterns from file
+ --log-file string Log everything to this file
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set, limits the recursion depth to this. (default -1)
+ --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --size-only Skip based on size only, not mod-time or checksum
+ --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
+ --timeout duration IO idle timeout (default 5m0s)
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ -v, --verbose Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
+
+###### Auto generated by spf13/cobra on 4-Aug-2016
diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md
new file mode 100644
index 000000000..ccc2efdda
--- /dev/null
+++ b/docs/content/commands/rclone_lsl.md
@@ -0,0 +1,89 @@
+---
+date: 2016-08-04T21:37:09+01:00
+title: "rclone lsl"
+slug: rclone_lsl
+url: /commands/rclone_lsl/
+---
+## rclone lsl
+
+List all the objects in the path with modification time, size and path.
+
+### Synopsis
+
+
+List all the objects in the path with modification time, size and path.
+
+```
+rclone lsl remote:path
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ --cpuprofile string Write cpu profile to file
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently.
+ --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-filters Dump the filters to the output
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude string Exclude files matching pattern
+ --exclude-from string Read exclude patterns from file
+ --files-from string Read list of source-file names from file
+ -f, --filter string Add a file-filtering rule
+ --filter-from string Read filtering patterns from a file
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --include string Include files matching pattern
+ --include-from string Read include patterns from file
+ --log-file string Log everything to this file
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set, limits the recursion depth to this. (default -1)
+ --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --size-only Skip based on size only, not mod-time or checksum
+ --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
+ --timeout duration IO idle timeout (default 5m0s)
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ -v, --verbose Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
+
+###### Auto generated by spf13/cobra on 4-Aug-2016
diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md
new file mode 100644
index 000000000..353422a48
--- /dev/null
+++ b/docs/content/commands/rclone_md5sum.md
@@ -0,0 +1,92 @@
+---
+date: 2016-08-04T21:37:09+01:00
+title: "rclone md5sum"
+slug: rclone_md5sum
+url: /commands/rclone_md5sum/
+---
+## rclone md5sum
+
+Produces an md5sum file for all the objects in the path.
+
+### Synopsis
+
+
+Produces an md5sum file for all the objects in the path. This
+is in the same format as the standard md5sum tool produces.
+
+
+```
+rclone md5sum remote:path
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ --cpuprofile string Write cpu profile to file
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently.
+ --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-filters Dump the filters to the output
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude string Exclude files matching pattern
+ --exclude-from string Read exclude patterns from file
+ --files-from string Read list of source-file names from file
+ -f, --filter string Add a file-filtering rule
+ --filter-from string Read filtering patterns from a file
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --include string Include files matching pattern
+ --include-from string Read include patterns from file
+ --log-file string Log everything to this file
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set, limits the recursion depth to this. (default -1)
+ --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --size-only Skip based on size only, not mod-time or checksum
+ --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
+ --timeout duration IO idle timeout (default 5m0s)
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ -v, --verbose Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
+
+###### Auto generated by spf13/cobra on 4-Aug-2016
diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md
new file mode 100644
index 000000000..9b5c4bf93
--- /dev/null
+++ b/docs/content/commands/rclone_mkdir.md
@@ -0,0 +1,89 @@
+---
+date: 2016-08-04T21:37:09+01:00
+title: "rclone mkdir"
+slug: rclone_mkdir
+url: /commands/rclone_mkdir/
+---
+## rclone mkdir
+
+Make the path if it doesn't already exist.
+
+### Synopsis
+
+
+Make the path if it doesn't already exist.
+
+```
+rclone mkdir remote:path
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ --cpuprofile string Write cpu profile to file
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently.
+ --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-filters Dump the filters to the output
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude string Exclude files matching pattern
+ --exclude-from string Read exclude patterns from file
+ --files-from string Read list of source-file names from file
+ -f, --filter string Add a file-filtering rule
+ --filter-from string Read filtering patterns from a file
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --include string Include files matching pattern
+ --include-from string Read include patterns from file
+ --log-file string Log everything to this file
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set, limits the recursion depth to this. (default -1)
+ --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --size-only Skip based on size only, not mod-time or checksum
+ --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
+ --timeout duration IO idle timeout (default 5m0s)
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ -v, --verbose Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
+
+###### Auto generated by spf13/cobra on 4-Aug-2016
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md
new file mode 100644
index 000000000..3c18bbf5f
--- /dev/null
+++ b/docs/content/commands/rclone_move.md
@@ -0,0 +1,105 @@
+---
+date: 2016-08-04T21:37:09+01:00
+title: "rclone move"
+slug: rclone_move
+url: /commands/rclone_move/
+---
+## rclone move
+
+Move files from source to dest.
+
+### Synopsis
+
+
+Moves the contents of the source directory to the destination
+directory. Rclone will error if the source and destination overlap.
+
+If no filters are in use and if possible this will server side move
+`source:path` into `dest:path`. After this `source:path` will no
+longer exist.
+
+Otherwise for each file in `source:path` selected by the filters (if
+any) this will move it into `dest:path`. If possible a server side
+move will be used, otherwise it will copy it (server side if possible)
+into `dest:path` then delete the original (if no errors on copy) in
+`source:path`.
+
+**Important**: Since this can cause data loss, test first with the
+--dry-run flag.
+
+
+```
+rclone move source:path dest:path
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ --cpuprofile string Write cpu profile to file
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently.
+ --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-filters Dump the filters to the output
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude string Exclude files matching pattern
+ --exclude-from string Read exclude patterns from file
+ --files-from string Read list of source-file names from file
+ -f, --filter string Add a file-filtering rule
+ --filter-from string Read filtering patterns from a file
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --include string Include files matching pattern
+ --include-from string Read include patterns from file
+ --log-file string Log everything to this file
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set, limits the recursion depth to this. (default -1)
+ --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --size-only Skip based on size only, not mod-time or checksum
+ --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
+ --timeout duration IO idle timeout (default 5m0s)
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ -v, --verbose Print lots more stuff
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32
+
+###### Auto generated by spf13/cobra on 4-Aug-2016
diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md
new file mode 100644
index 000000000..78cdc66a8
--- /dev/null
+++ b/docs/content/commands/rclone_purge.md
@@ -0,0 +1,93 @@
+---
+date: 2016-08-04T21:37:09+01:00
+title: "rclone purge"
+slug: rclone_purge
+url: /commands/rclone_purge/
+---
+## rclone purge
+
+Remove the path and all of its contents.
+
+### Synopsis
+
+
+Remove the path and all of its contents. Note that this does not obey
+include/exclude filters - everything will be removed. Use `delete` if
+you want to selectively delete files.
+
+
+```
+rclone purge remote:path
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M)
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ --cpuprofile string Write cpu profile to file
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently.
+ --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-filters Dump the filters to the output
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude string Exclude files matching pattern
+ --exclude-from string Read exclude patterns from file
+ --files-from string Read list of source-file names from file
+ -f, --filter string Add a file-filtering rule
+ --filter-from string Read filtering patterns from a file
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --include string Include files matching pattern
+ --include-from string Read include patterns from file
+ --log-file string Log everything to this file
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set, limits the recursion depth to this. (default -1)
+ --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --size-only Skip based on size only, not mod-time or checksum
+ --stats duration Interval to print stats (0 to disable) (default 1m0s)
+ --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G)
+ --timeout duration IO idle timeout (default 5m0s)
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ -v, --verbose Print lots more stuff +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32 + +###### Auto generated by spf13/cobra on 4-Aug-2016 diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md new file mode 100644 index 000000000..8c913f3a1 --- /dev/null +++ b/docs/content/commands/rclone_rmdir.md @@ -0,0 +1,91 @@ +--- +date: 2016-08-04T21:37:09+01:00 +title: "rclone rmdir" +slug: rclone_rmdir +url: /commands/rclone_rmdir/ +--- +## rclone rmdir + +Remove the path if empty. + +### Synopsis + + + +Remove the path. Note that you can't remove a path with +objects in it, use purge for that. + +``` +rclone rmdir remote:path +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M) + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. + --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. 
(default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + --cpuprofile string Write cpu profile to file + --delete-after When synchronizing, delete files on destination after transferring + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. + --drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) + --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. + --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude string Exclude files matching pattern + --exclude-from string Read exclude patterns from file + --files-from string Read list of source-file names from file + -f, --filter string Add a file-filtering rule + --filter-from string Read filtering patterns from a file + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. 
+ -I, --ignore-times Don't skip files that match size and time - transfer all files + --include string Include files matching pattern + --include-from string Read include patterns from file + --log-file string Log everything to this file + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M) + --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval to print stats (0 to disable) (default 1m0s) + --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G) + --timeout duration IO idle timeout (default 5m0s) + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. 
+ -v, --verbose Print lots more stuff +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32 + +###### Auto generated by spf13/cobra on 4-Aug-2016 diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md new file mode 100644 index 000000000..609c23449 --- /dev/null +++ b/docs/content/commands/rclone_sha1sum.md @@ -0,0 +1,92 @@ +--- +date: 2016-08-04T21:37:09+01:00 +title: "rclone sha1sum" +slug: rclone_sha1sum +url: /commands/rclone_sha1sum/ +--- +## rclone sha1sum + +Produces an sha1sum file for all the objects in the path. + +### Synopsis + + + +Produces an sha1sum file for all the objects in the path. This +is in the same format as the standard sha1sum tool produces. + + +``` +rclone sha1sum remote:path +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M) + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. + --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. 
(default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + --cpuprofile string Write cpu profile to file + --delete-after When synchronizing, delete files on destination after transferring + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. + --drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) + --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. + --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude string Exclude files matching pattern + --exclude-from string Read exclude patterns from file + --files-from string Read list of source-file names from file + -f, --filter string Add a file-filtering rule + --filter-from string Read filtering patterns from a file + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. 
+ -I, --ignore-times Don't skip files that match size and time - transfer all files + --include string Include files matching pattern + --include-from string Read include patterns from file + --log-file string Log everything to this file + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M) + --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval to print stats (0 to disable) (default 1m0s) + --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G) + --timeout duration IO idle timeout (default 5m0s) + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. 
+ -v, --verbose Print lots more stuff +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32 + +###### Auto generated by spf13/cobra on 4-Aug-2016 diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md new file mode 100644 index 000000000..55c3e75a3 --- /dev/null +++ b/docs/content/commands/rclone_size.md @@ -0,0 +1,89 @@ +--- +date: 2016-08-04T21:37:09+01:00 +title: "rclone size" +slug: rclone_size +url: /commands/rclone_size/ +--- +## rclone size + +Prints the total size and number of objects in remote:path. + +### Synopsis + + +Prints the total size and number of objects in remote:path. + +``` +rclone size remote:path +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M) + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. + --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. 
(default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + --cpuprofile string Write cpu profile to file + --delete-after When synchronizing, delete files on destination after transferring + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. + --drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) + --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. + --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude string Exclude files matching pattern + --exclude-from string Read exclude patterns from file + --files-from string Read list of source-file names from file + -f, --filter string Add a file-filtering rule + --filter-from string Read filtering patterns from a file + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. 
+ -I, --ignore-times Don't skip files that match size and time - transfer all files + --include string Include files matching pattern + --include-from string Read include patterns from file + --log-file string Log everything to this file + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M) + --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval to print stats (0 to disable) (default 1m0s) + --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G) + --timeout duration IO idle timeout (default 5m0s) + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. 
+ -v, --verbose Print lots more stuff +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32 + +###### Auto generated by spf13/cobra on 4-Aug-2016 diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md new file mode 100644 index 000000000..81f22e209 --- /dev/null +++ b/docs/content/commands/rclone_sync.md @@ -0,0 +1,108 @@ +--- +date: 2016-08-04T21:37:09+01:00 +title: "rclone sync" +slug: rclone_sync +url: /commands/rclone_sync/ +--- +## rclone sync + +Make source and dest identical, modifying destination only. + +### Synopsis + + + +Sync the source to the destination, changing the destination +only. Doesn't transfer unchanged files, testing by size and +modification time or MD5SUM. Destination is updated to match +source, including deleting files if necessary. + +**Important**: Since this can cause data loss, test first with the +`--dry-run` flag to see exactly what would be copied and deleted. + +Note that files in the destination won't be deleted if there were any +errors at any point. + +It is always the contents of the directory that is synced, not the +directory so when source:path is a directory, it's the contents of +source:path that are copied, not the directory name and contents. See +extended explanation in the `copy` command above if unsure. + +If dest:path doesn't exist, it is created and the source:path contents +go there. + + +``` +rclone sync source:path dest:path +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M) + --b2-test-mode string A flag string for X-Bz-Test-Mode header. 
+ --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. + --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + --cpuprofile string Write cpu profile to file + --delete-after When synchronizing, delete files on destination after transferring + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. + --drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) + --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. + --dropbox-chunk-size value Upload chunk size. Max 150M. 
(default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude string Exclude files matching pattern + --exclude-from string Read exclude patterns from file + --files-from string Read list of source-file names from file + -f, --filter string Add a file-filtering rule + --filter-from string Read filtering patterns from a file + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --include string Include files matching pattern + --include-from string Read include patterns from file + --log-file string Log everything to this file + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. 
(default 10M) + --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval to print stats (0 to disable) (default 1m0s) + --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G) + --timeout duration IO idle timeout (default 5m0s) + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. + -v, --verbose Print lots more stuff +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32 + +###### Auto generated by spf13/cobra on 4-Aug-2016 diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md new file mode 100644 index 000000000..564b45c64 --- /dev/null +++ b/docs/content/commands/rclone_version.md @@ -0,0 +1,89 @@ +--- +date: 2016-08-04T21:37:09+01:00 +title: "rclone version" +slug: rclone_version +url: /commands/rclone_version/ +--- +## rclone version + +Show the version number. + +### Synopsis + + +Show the version number. + +``` +rclone version +``` + +### Options inherited from parent commands + +``` + --acd-templink-threshold value Files >= this size will be downloaded via their tempLink. (default 9G) + --ask-password Allow prompt for password for encrypted configuration. (default true) + --b2-chunk-size value Upload chunk size. Must fit in memory. (default 96M) + --b2-test-mode string A flag string for X-Bz-Test-Mode header. + --b2-upload-cutoff value Cutoff for switching to chunked upload (default 190.735M) + --b2-versions Include old versions in directory listings. 
+ --bwlimit value Bandwidth limit in kBytes/s, or use suffix b|k|M|G + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum & size, not mod-time & size + --config string Config file. (default "/home/ncw/.rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + --cpuprofile string Write cpu profile to file + --delete-after When synchronizing, delete files on destination after transferring + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer (default) + --delete-excluded Delete files on dest excluded from sync + --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list. + --drive-chunk-size value Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete) + --drive-upload-cutoff value Cutoff for switching to chunked upload (default 8M) + --drive-use-trash Send files to the trash instead of deleting permanently. + --dropbox-chunk-size value Upload chunk size. Max 150M. (default 128M) + -n, --dry-run Do a trial run with no permanent changes + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-filters Dump the filters to the output + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude string Exclude files matching pattern + --exclude-from string Read exclude patterns from file + --files-from string Read list of source-file names from file + -f, --filter string Add a file-filtering rule + --filter-from string Read filtering patterns from a file + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. 
+ -I, --ignore-times Don't skip files that match size and time - transfer all files + --include string Include files matching pattern + --include-from string Read include patterns from file + --log-file string Log everything to this file + --low-level-retries int Number of low level retries to do. (default 10) + --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size value Don't transfer any file larger than this in k or suffix b|k|M|G (default off) + --memprofile string Write memory profile to file + --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y + --min-size value Don't transfer any file smaller than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + --onedrive-chunk-size value Above this size files will be chunked - must be multiple of 320k. (default 10M) + --onedrive-upload-cutoff value Cutoff for switching to chunked upload - must be <= 100MB (default 10M) + -q, --quiet Print as little stuff as possible + --retries int Retry operations this many times if they fail (default 3) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval to print stats (0 to disable) (default 1m0s) + --swift-chunk-size value Above this size files will be chunked into a _segments container. (default 5G) + --timeout duration IO idle timeout (default 5m0s) + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. 
+ -v, --verbose Print lots more stuff +``` + +### SEE ALSO +* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.32 + +###### Auto generated by spf13/cobra on 4-Aug-2016 diff --git a/docs/content/docs.md b/docs/content/docs.md index 641730989..82880d279 100644 --- a/docs/content/docs.md +++ b/docs/content/docs.md @@ -49,251 +49,34 @@ You can define as many storage paths as you like in the config file. Subcommands ----------- -### rclone copy source:path dest:path ### - -Copy the source to the destination. Doesn't transfer -unchanged files, testing by size and modification time or -MD5SUM. Doesn't delete files from the destination. - -Note that it is always the contents of the directory that is synced, -not the directory so when source:path is a directory, it's the -contents of source:path that are copied, not the directory name and -contents. - -If dest:path doesn't exist, it is created and the source:path contents -go there. - -For example - - rclone copy source:sourcepath dest:destpath - -Let's say there are two files in sourcepath - - sourcepath/one.txt - sourcepath/two.txt - -This copies them to - - destpath/one.txt - destpath/two.txt - -Not to - - destpath/sourcepath/one.txt - destpath/sourcepath/two.txt - -If you are familiar with `rsync`, rclone always works as if you had -written a trailing / - meaning "copy the contents of this directory". -This applies to all commands and whether you are talking about the -source or destination. - -See the `--no-traverse` option for controlling whether rclone lists -the destination directory or not. - -### rclone sync source:path dest:path ### - -Sync the source to the destination, changing the destination -only. Doesn't transfer unchanged files, testing by size and -modification time or MD5SUM. Destination is updated to match -source, including deleting files if necessary. 
- -**Important**: Since this can cause data loss, test first with the -`--dry-run` flag to see exactly what would be copied and deleted. - -Note that files in the destination won't be deleted if there were any -errors at any point. - -It is always the contents of the directory that is synced, not the -directory so when source:path is a directory, it's the contents of -source:path that are copied, not the directory name and contents. See -extended explanation in the `copy` command above if unsure. - -If dest:path doesn't exist, it is created and the source:path contents -go there. - -### move source:path dest:path ### - -Moves the contents of the source directory to the destination -directory. Rclone will error if the source and destination overlap. - -If no filters are in use and if possible this will server side move -`source:path` into `dest:path`. After this `source:path` will no -longer longer exist. - -Otherwise for each file in `source:path` selected by the filters (if -any) this will move it into `dest:path`. If possible a server side -move will be used, otherwise it will copy it (server side if possible) -into `dest:path` then delete the original (if no errors on copy) in -`source:path`. - -**Important**: Since this can cause data loss, test first with the ---dry-run flag. - -### rclone ls remote:path ### - -List all the objects in the path with size and path. - -### rclone lsd remote:path ### - -List all directories/containers/buckets in the the path. - -### rclone lsl remote:path ### - -List all the objects in the the path with modification time, -size and path. - -### rclone md5sum remote:path ### - -Produces an md5sum file for all the objects in the path. This -is in the same format as the standard md5sum tool produces. - -### rclone sha1sum remote:path ### - -Produces an sha1sum file for all the objects in the path. This -is in the same format as the standard sha1sum tool produces. 
-
-### rclone size remote:path ###
-
-Prints the total size of objects in remote:path and the number of
-objects.
-
-### rclone mkdir remote:path ###
-
-Make the path if it doesn't already exist
-
-### rclone rmdir remote:path ###
-
-Remove the path. Note that you can't remove a path with
-objects in it, use purge for that.
-
-### rclone purge remote:path ###
-
-Remove the path and all of its contents. Note that this does not obey
-include/exclude filters - everything will be removed. Use `delete` if
-you want to selectively delete files.
-
-### rclone delete remote:path ###
-
-Remove the contents of path. Unlike `purge` it obeys include/exclude
-filters so can be used to selectively delete files.
-
-Eg delete all files bigger than 100MBytes
-
-Check what would be deleted first (use either)
-
-    rclone --min-size 100M lsl remote:path
-    rclone --dry-run --min-size 100M delete remote:path
-
-Then delete
-
-    rclone --min-size 100M delete remote:path
-
-That reads "delete everything with a minimum size of 100 MB", hence
-delete all files bigger than 100MBytes.
-
-### rclone check source:path dest:path ###
-
-Checks the files in the source and destination match. It
-compares sizes and MD5SUMs and prints a report of files which
-don't match. It doesn't alter the source or destination.
-
-`--size-only` may be used to only compare the sizes, not the MD5SUMs.
-
-### rclone cleanup remote:path ###
-
-Clean up the remote if possible. Empty the trash or delete old file
-versions. Not supported by all remotes.
-
-### rclone dedupe remote:path ###
-
-By default `dedup` interactively finds duplicate files and offers to
-delete all but one or rename them to be different. Only useful with
-Google Drive which can have duplicate file names.
-
-The `dedupe` command will delete all but one of any identical (same
-md5sum) files it finds without confirmation. This means that for most
-duplicated files the `dedupe` command will not be interactive. You
-can use `--dry-run` to see what would happen without doing anything.
-
-Here is an example run.
-
-Before - with duplicates
-
-```
-$ rclone lsl drive:dupes
-  6048320 2016-03-05 16:23:16.798000000 one.txt
-  6048320 2016-03-05 16:23:11.775000000 one.txt
-   564374 2016-03-05 16:23:06.731000000 one.txt
-  6048320 2016-03-05 16:18:26.092000000 one.txt
-  6048320 2016-03-05 16:22:46.185000000 two.txt
-  1744073 2016-03-05 16:22:38.104000000 two.txt
-   564374 2016-03-05 16:22:52.118000000 two.txt
-```
-
-Now the `dedupe` session
-
-```
-$ rclone dedupe drive:dupes
-2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
-one.txt: Found 4 duplicates - deleting identical copies
-one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
-one.txt: 2 duplicates remain
-  1:  6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
-  2:   564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
-s) Skip and do nothing
-k) Keep just one (choose which in next step)
-r) Rename all to be different (by changing file.jpg to file-1.jpg)
-s/k/r> k
-Enter the number of the file to keep> 1
-one.txt: Deleted 1 extra copies
-two.txt: Found 3 duplicates - deleting identical copies
-two.txt: 3 duplicates remain
-  1:   564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
-  2:  6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
-  3:  1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
-s) Skip and do nothing
-k) Keep just one (choose which in next step)
-r) Rename all to be different (by changing file.jpg to file-1.jpg)
-s/k/r> r
-two-1.txt: renamed from: two.txt
-two-2.txt: renamed from: two.txt
-two-3.txt: renamed from: two.txt
-```
-
-The result being
-
-```
-$ rclone lsl drive:dupes
-  6048320 2016-03-05 16:23:16.798000000 one.txt
-   564374 2016-03-05 16:22:52.118000000 two-1.txt
-  6048320 2016-03-05 16:22:46.185000000 two-2.txt
-  1744073 2016-03-05 16:22:38.104000000 two-3.txt
-```
-
-Dedupe can be run non interactively using the `--dedupe-mode` flag.
-
- * `--dedupe-mode interactive` - interactive as above.
- * `--dedupe-mode skip` - removes identical files then skips anything left.
- * `--dedupe-mode first` - removes identical files then keeps the first one.
- * `--dedupe-mode newest` - removes identical files then keeps the newest one.
- * `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
- * `--dedupe-mode rename` - removes identical files then renames the rest to be different.
-
-For example to rename all the identically named photos in your Google Photos directory, do
-
-    rclone dedupe --dedupe-mode rename "drive:Google Photos"
-
-The modes can also be passed as an extra parameter, eg
-
-    rclone dedupe rename "drive:Google Photos"
-
-### rclone config ###
-
-Enter an interactive configuration session.
-
-### rclone help ###
-
-Prints help on rclone commands and options.
+rclone uses a system of subcommands. For example
+
+    rclone ls remote:path # lists a remote
+    rclone copy /local/path remote:path # copies /local/path to the remote
+    rclone sync /local/path remote:path # syncs /local/path to the remote
+
+The main rclone commands, with the most used first
+
+* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
+* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied
+* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
+* [rclone move](/commands/rclone_move/) - Move files from source to dest.
+* [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
+* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents.
+* [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist.
+* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path.
+* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
+* [rclone ls](/commands/rclone_ls/) - List all the objects in the path with size and path.
+* [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path.
+* [rclone lsl](/commands/rclone_lsl/) - List all the objects in the path with modification time, size and path.
+* [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path.
+* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces a sha1sum file for all the objects in the path.
+* [rclone size](/commands/rclone_size/) - Returns the total size and number of objects in remote:path.
+* [rclone version](/commands/rclone_version/) - Show the version number.
+* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
+* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files and delete/rename them.
+
+See the [commands index](/commands/) for the full list.

Copying single files
--------------------

diff --git a/docs/layouts/chrome/navbar.html b/docs/layouts/chrome/navbar.html
index 2c12e241e..bd38c3b2b 100644
--- a/docs/layouts/chrome/navbar.html
+++ b/docs/layouts/chrome/navbar.html
@@ -27,6 +27,24 @@
  • Privacy Policy