From a6387e1f81a2da09262a9b26b838554e971b2b00 Mon Sep 17 00:00:00 2001 From: Nick Craig-Wood Date: Mon, 26 Aug 2019 15:25:20 +0100 Subject: [PATCH] Version v1.49.0 --- MANUAL.html | 3122 ++++++++----- MANUAL.md | 3345 ++++++++++---- MANUAL.txt | 3424 ++++++++++---- bin/make_manual.py | 2 +- docs/content/azureblob.md | 15 +- docs/content/b2.md | 13 + docs/content/changelog.md | 152 +- docs/content/commands/rclone.md | 2 +- docs/content/commands/rclone_about.md | 2 +- docs/content/commands/rclone_authorize.md | 2 +- docs/content/commands/rclone_cachestats.md | 2 +- docs/content/commands/rclone_cat.md | 2 +- docs/content/commands/rclone_check.md | 2 +- docs/content/commands/rclone_cleanup.md | 2 +- docs/content/commands/rclone_config.md | 5 +- docs/content/commands/rclone_config_create.md | 2 +- docs/content/commands/rclone_config_delete.md | 2 +- .../commands/rclone_config_disconnect.md | 36 + docs/content/commands/rclone_config_dump.md | 2 +- docs/content/commands/rclone_config_edit.md | 2 +- docs/content/commands/rclone_config_file.md | 2 +- .../commands/rclone_config_password.md | 2 +- .../commands/rclone_config_providers.md | 2 +- .../commands/rclone_config_reconnect.md | 36 + docs/content/commands/rclone_config_show.md | 2 +- docs/content/commands/rclone_config_update.md | 2 +- .../commands/rclone_config_userinfo.md | 34 + docs/content/commands/rclone_copy.md | 2 +- docs/content/commands/rclone_copyto.md | 2 +- docs/content/commands/rclone_copyurl.md | 2 +- docs/content/commands/rclone_cryptcheck.md | 2 +- docs/content/commands/rclone_cryptdecode.md | 2 +- docs/content/commands/rclone_dbhashsum.md | 2 +- docs/content/commands/rclone_dedupe.md | 2 +- docs/content/commands/rclone_delete.md | 2 +- docs/content/commands/rclone_deletefile.md | 2 +- .../commands/rclone_genautocomplete.md | 2 +- .../commands/rclone_genautocomplete_bash.md | 2 +- .../commands/rclone_genautocomplete_zsh.md | 2 +- docs/content/commands/rclone_gendocs.md | 2 +- 
docs/content/commands/rclone_hashsum.md | 2 +- docs/content/commands/rclone_link.md | 2 +- docs/content/commands/rclone_listremotes.md | 2 +- docs/content/commands/rclone_ls.md | 2 +- docs/content/commands/rclone_lsd.md | 2 +- docs/content/commands/rclone_lsf.md | 2 +- docs/content/commands/rclone_lsjson.md | 2 +- docs/content/commands/rclone_lsl.md | 2 +- docs/content/commands/rclone_md5sum.md | 2 +- docs/content/commands/rclone_mkdir.md | 2 +- docs/content/commands/rclone_mount.md | 7 +- docs/content/commands/rclone_move.md | 2 +- docs/content/commands/rclone_moveto.md | 2 +- docs/content/commands/rclone_ncdu.md | 3 +- docs/content/commands/rclone_obscure.md | 2 +- docs/content/commands/rclone_purge.md | 2 +- docs/content/commands/rclone_rc.md | 2 +- docs/content/commands/rclone_rcat.md | 2 +- docs/content/commands/rclone_rcd.md | 2 +- docs/content/commands/rclone_rmdir.md | 2 +- docs/content/commands/rclone_rmdirs.md | 2 +- docs/content/commands/rclone_serve.md | 2 +- docs/content/commands/rclone_serve_dlna.md | 2 +- docs/content/commands/rclone_serve_ftp.md | 69 +- docs/content/commands/rclone_serve_http.md | 11 +- docs/content/commands/rclone_serve_restic.md | 11 +- docs/content/commands/rclone_serve_sftp.md | 69 +- docs/content/commands/rclone_serve_webdav.md | 78 +- docs/content/commands/rclone_settier.md | 2 +- docs/content/commands/rclone_sha1sum.md | 2 +- docs/content/commands/rclone_size.md | 2 +- docs/content/commands/rclone_sync.md | 2 +- docs/content/commands/rclone_touch.md | 2 +- docs/content/commands/rclone_tree.md | 2 +- docs/content/commands/rclone_version.md | 2 +- docs/content/flags.md | 35 +- docs/content/http.md | 19 + docs/content/koofr.md | 9 + docs/content/local.md | 26 + docs/content/putio.md | 3 + docs/content/sftp.md | 18 + docs/content/union.md | 2 +- docs/content/webdav.md | 13 + docs/layouts/partials/version.html | 2 +- fs/version.go | 2 +- rclone.1 | 4066 +++++++++++++---- 86 files changed, 10711 insertions(+), 4030 deletions(-) 
create mode 100644 docs/content/commands/rclone_config_disconnect.md create mode 100644 docs/content/commands/rclone_config_reconnect.md create mode 100644 docs/content/commands/rclone_config_userinfo.md diff --git a/MANUAL.html b/MANUAL.html index 0cbdfbda5..76b71a9b4 100644 --- a/MANUAL.html +++ b/MANUAL.html @@ -17,24 +17,26 @@

rclone(1) User Manual

Nick Craig-Wood

Aug 26, 2019

Rclone - rsync for cloud storage

Rclone is a command line program to sync files and directories to and from:

Links

–s3-storage-class

@@ -5531,7 +6294,7 @@ In the case the object is larger than 5Gb or is in Glacier or Glacier Deep Archi -

Advanced Options

Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).

–s3-bucket-acl

Canned ACL used when creating buckets.

@@ -5959,9 +6722,8 @@ n/s> n name> wasabi Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) +[snip] +XX / Amazon S3 (also Dreamhost, Ceph, Minio) \ "s3" [snip] Storage> s3 @@ -6166,33 +6928,11 @@ n/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 +[snip] +XX / Backblaze B2 \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 3 +[snip] +Storage> b2 Account ID or Application Key ID account> 123456789abc Application Key @@ -6298,8 +7038,21 @@ $ rclone -q --b2-versions ls b2:cleanup-test 15 one-v2016-07-02-155621-000.txt

Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.

Note that when using --b2-versions no file write operations are permitted, so you can’t upload files or delete them.
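Reading old versions still works, however, so one way to restore a previous version is to copy it out by its timestamped name (the bucket and file names below are illustrative):

```shell
# Sketch: copy an old version out of B2 to local disk. The --b2-versions
# flag makes the timestamped names visible; names below are made up.
rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-02-155621-000.txt /tmp/restore
```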


Rclone supports generating file share links for private B2 buckets. They can either be for a file for example:

+
./rclone link B2:bucket/path/to/file.txt
+https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
+
+

or if run on a directory you will get:

+
./rclone link B2:bucket/path
+https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx
+

You can then use the authorization token (the part of the URL from the ?Authorization= on) on any file path under that directory. For example:

+
https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
+https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
+https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx
+
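The token reuse described above can be sketched in plain shell (the URL is the made-up example from above):

```shell
# Everything from "?Authorization=" on is the token; append it to any
# file path under the linked directory. The example URL is made up.
url='https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx'
token="${url#*\?}"   # strip everything up to and including the "?"
echo "https://f002.backblazeb2.com/file/bucket/path/folder/file3?${token}"
```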
-

Standard Options

Here are the standard options specific to b2 (Backblaze B2).

–b2-account

Account ID or Application Key ID

@@ -6325,7 +7078,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test
  • Type: bool
  • Default: false
  • -

Advanced Options

    Here are the advanced options specific to b2 (Backblaze B2).

    –b2-endpoint

    Endpoint for the service. Leave blank normally.

    @@ -6387,13 +7140,22 @@ $ rclone -q --b2-versions ls b2:cleanup-test

    –b2-download-url

    Custom endpoint for downloads.

    -

    This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. Leave blank if you want to use the endpoint provided by Backblaze.

    +

    This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. This is probably only useful for a public bucket. Leave blank if you want to use the endpoint provided by Backblaze.

    +

    –b2-download-auth-duration

    +

    Time before the authorization token will expire in s or suffix ms|s|m|h|d.

    +

    The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week.
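As a sketch, this option can be combined with rclone link to control how long a generated link stays valid (the remote name and path are assumptions):

```shell
# Hypothetical example: make links generated for this private bucket
# expire after one day instead of the default.
rclone link --b2-download-auth-duration 1d B2:bucket/path/to/file.txt
```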

    +

    Box

    Paths are specified as remote:path

    @@ -6410,38 +7172,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box +[snip] +XX / Box \ "box" - 5 / Dropbox - \ "dropbox" - 6 / Encrypt/Decrypt a remote - \ "crypt" - 7 / FTP Connection - \ "ftp" - 8 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 9 / Google Drive - \ "drive" -10 / Hubic - \ "hubic" -11 / Local Disk - \ "local" -12 / Microsoft OneDrive - \ "onedrive" -13 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -14 / SSH/SFTP Connection - \ "sftp" -15 / Yandex Disk - \ "yandex" -16 / http Connection - \ "http" +[snip] Storage> box Box App Client Id - leave blank normally. client_id> @@ -6552,7 +7286,7 @@ y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y -

Modified time and hashes

    Box allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

    Box supports SHA1 type hashes, so you can use the --checksum flag.
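For example, a sync can be made to compare by SHA1 rather than size and modtime (the remote and directory names are assumptions):

```shell
# Sketch: use checksums instead of size/modtime to decide what to copy.
rclone sync --checksum /home/local/dir box:dir
```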

    Transfers

    @@ -6560,7 +7294,7 @@ y/e/d> y

    Deleting files

    Depending on the enterprise settings for your user, the item will either be actually deleted from Box or moved to the trash.

    -

Standard Options

    Here are the standard options specific to box (Box).

    –box-client-id

    Box App Client Id. Leave blank normally.

    @@ -6578,7 +7312,7 @@ y/e/d> y
  • Type: string
  • Default: ""
  • -

Advanced Options

    Here are the advanced options specific to box (Box).

    –box-upload-cutoff

    Cutoff for switching to multipart upload (>= 50MB).

    @@ -6617,11 +7351,11 @@ n/r/c/s/q> n name> test-cache Type of storage to configure. Choose a number from below, or type in your own value -... - 5 / Cache a remote +[snip] +XX / Cache a remote \ "cache" -... -Storage> 5 +[snip] +Storage> cache Remote to cache. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). @@ -6757,7 +7491,7 @@ chunk_total_size = 10G

    Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt.

    Params: - remote = path to remote (required) - withData = true/false to delete cached data (chunks) as well (optional, false by default)

    -

Standard Options

    Here are the standard options specific to cache (Cache a remote).

    –cache-remote

    Remote to cache. Normally should contain a ‘:’ and a path, eg “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not recommended).

    @@ -6862,7 +7596,7 @@ chunk_total_size = 10G -

Advanced Options

    Here are the advanced options specific to cache (Cache a remote).

    –cache-plex-token

    The plex token for authentication - auto set normally

    @@ -7010,33 +7744,11 @@ n/s/q> n name> secret Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote +[snip] +XX / Encrypt/Decrypt a remote \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 5 +[snip] +Storage> crypt Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). @@ -7177,12 +7889,12 @@ $ rclone -q ls secret:

True

Encrypts the whole file path including directory names. Example: 1/12/123.txt is encrypted to p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0

False

Only encrypts file names, skips directory names. Example: 1/12/123.txt is encrypted to 1/12/qgm4avr35m5loi1th53ato71v0

    -

Modified time and hashes

    Crypt stores modification times using the underlying remote so support depends on that.

    Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.

    Note that you should use the rclone cryptcheck command to check the integrity of a crypted remote instead of rclone check which can’t check the checksums properly.
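A minimal sketch, assuming a local source and a crypted remote named secret::

```shell
# Verify the crypted remote against the plaintext source, checking
# checksums of the decrypted data (names are assumptions).
rclone cryptcheck /home/local/dir secret:
```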

    -

Standard Options

    Here are the standard options specific to crypt (Encrypt/Decrypt a remote).

    –crypt-remote

    Remote to encrypt/decrypt. Normally should contain a ‘:’ and a path, eg “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not recommended).

    @@ -7250,7 +7962,7 @@ $ rclone -q ls secret:
  • Type: string
  • Default: ""
  • -

Advanced Options

    Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).

    –crypt-show-mapping

    For all files listed show how the names encrypt.

    @@ -7341,33 +8053,11 @@ e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox +[snip] +XX / Dropbox \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 4 +[snip] +Storage> dropbox Dropbox App Key - leave blank normally. app_key> Dropbox App Secret - leave blank normally. @@ -7399,12 +8089,12 @@ y/e/d> y

    If you wish to see Team Folders you must use a leading / in the path, so rclone lsd remote:/ will refer to the root and show you all Team Folders and your User Folder.

    You can then use team folders like this remote:/TeamFolder and remote:/TeamFolder/path/to/file.

    A leading / for a Dropbox personal account will do nothing, but it will take an extra HTTP transaction so it should be avoided.

    -

Modified time and Hashes

    Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.

    This means that if you uploaded your data with an older version of rclone which didn’t support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don’t want this to happen use --size-only or --checksum flag to stop it.

    Dropbox supports its own hash type which is checked for all transfers.
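A sketch of the workaround above, assuming the remote is called dropbox::

```shell
# Avoid re-uploading everything just to fix modification times by
# comparing on size only (or use --checksum instead).
rclone sync --size-only /home/local/dir dropbox:dir
```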

    -

Standard Options

    Here are the standard options specific to dropbox (Dropbox).

    –dropbox-client-id

    Dropbox App Client Id Leave blank normally.

    @@ -7422,7 +8112,7 @@ y/e/d> y
  • Type: string
  • Default: ""
  • -

Advanced Options

    Here are the advanced options specific to dropbox (Dropbox).

    –dropbox-chunk-size

    Upload chunk size. (< 150M).

    @@ -7464,7 +8154,7 @@ Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] -10 / FTP Connection +XX / FTP Connection \ "ftp" [snip] Storage> ftp @@ -7520,7 +8210,7 @@ y/e/d> y

    Implicit TLS

FTP supports implicit FTP over TLS servers (FTPS). This has to be enabled in the config for the remote. The default FTPS port is 990 so the port will likely have to be explicitly set in the config for the remote.
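A non-interactive sketch of such a config, assuming the rclone config create key/value syntax and a made-up host:

```shell
# Create an implicit-FTPS remote with the non-default port set
# explicitly (remote name and host are assumptions).
rclone config create myftps ftp host ftp.example.com tls true port 990
```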

    -

Standard Options

    Here are the standard options specific to ftp (FTP Connection).

    –ftp-host

    FTP host to connect to

    @@ -7569,7 +8259,7 @@ y/e/d> y
  • Type: bool
  • Default: false
  • -

Advanced Options

    Here are the advanced options specific to ftp (FTP Connection).

    –ftp-concurrency

    Maximum number of FTP simultaneous connections, 0 for unlimited

    @@ -7608,33 +8298,11 @@ e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) +[snip] +XX / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 6 +[snip] +Storage> google cloud storage Google Application Client Id - leave blank normally. client_id> Google Application Client Secret - leave blank normally. @@ -7764,7 +8432,7 @@ y/e/d> y

    Modified time

Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the “mtime” key in RFC3339 format accurate to 1ns.

    -

Standard Options

    Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

    –gcs-client-id

    Google Application Client Id Leave blank normally.

    @@ -8033,7 +8701,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -10 / Google Drive +XX / Google Drive \ "drive" [snip] Storage> drive @@ -8457,7 +9125,7 @@ trashed=false and 'c' in parents -

Standard Options

    Here are the standard options specific to drive (Google Drive).

    –drive-client-id

    Google Application Client Id Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.

    @@ -8526,7 +9194,7 @@ trashed=false and 'c' in parents
  • Type: string
  • Default: ""
  • -

Advanced Options

    Here are the advanced options specific to drive (Google Drive).

    –drive-service-account-credentials

    Service Account Credentials JSON blob Leave blank normally. Needed only if you want use SA instead of interactive login.

    @@ -8754,7 +9422,7 @@ trashed=false and 'c' in parents

    This is because rclone can’t find out the size of the Google docs without downloading them.

    Google docs will transfer correctly with rclone sync, rclone copy etc as rclone knows to ignore the size when doing the transfer.

    However an unfortunate consequence of this is that you can’t download Google docs using rclone mount - you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable.

    -

Duplicated files

Sometimes, for no reason I’ve been able to track down, drive will duplicate a file that rclone uploads. Drive, unlike all the other remotes, can have duplicated files.

    Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

    Use rclone dedupe to fix duplicated files.
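For example (the remote name is an assumption; without --dedupe-mode rclone asks interactively):

```shell
# Keep the newest copy of each duplicated file on the remote.
rclone dedupe --dedupe-mode newest drive:
```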

    @@ -8770,12 +9438,229 @@ trashed=false and 'c' in parents
    1. Log into the Google API Console with your Google account. It doesn’t matter what Google account you use. (It need not be the same account as the Google Drive you want to access)

    2. Select a project or create a new project.

3. Under “ENABLE APIS AND SERVICES” search for “Drive”, and enable the “Google Drive API”.

4. Click “Credentials” in the left-side panel (not “Create credentials”, which opens the wizard), then “Create credentials”, then “OAuth client ID”. It will prompt you to set the OAuth consent screen product name, if you haven’t set one already.

5. Choose an application type of “other”, and click “Create”. (the default name is fine)

6. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.

    (Thanks to @balazer on github for these instructions.)

    +

    Google Photos

    +

    The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos.

    +

    NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.

    +

    Configuring Google Photos

    +

The initial setup for google photos involves getting a token from Google Photos which you need to do in your browser. rclone config walks you through it.

    +

    Here is an example of how to make a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found - make a new one
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Enter a string value. Press Enter for the default ("").
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / Google Photos
    +   \ "google photos"
    +[snip]
    +Storage> google photos
    +** See help for google photos backend at: https://rclone.org/googlephotos/ **
    +
    +Google Application Client Id
    +Leave blank normally.
    +Enter a string value. Press Enter for the default ("").
    +client_id> 
    +Google Application Client Secret
    +Leave blank normally.
    +Enter a string value. Press Enter for the default ("").
    +client_secret> 
    +Set to make the Google Photos backend read only.
    +
    +If you choose read only then rclone will only request read only access
    +to your photos, otherwise rclone will request full access.
    +Enter a boolean value (true or false). Press Enter for the default ("false").
    +read_only> 
    +Edit advanced config? (y/n)
    +y) Yes
    +n) No
    +y/n> n
    +Remote config
    +Use auto config?
    + * Say Y if not sure
    + * Say N if you are working on a remote or headless machine
    +y) Yes
    +n) No
    +y/n> y
    +If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    +Log in and authorize rclone for access
    +Waiting for code...
    +Got code
    +
    +*** IMPORTANT: All media items uploaded to Google Photos with rclone
    +*** are stored in full resolution at original quality.  These uploads
    +*** will count towards storage in your Google Account.
    +
    +--------------------
    +[remote]
    +type = google photos
    +token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"}
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

    +

    This remote is called remote and can now be used like this

    +

    See all the albums in your photos

    +
    rclone lsd remote:album
    +

    Make a new album

    +
    rclone mkdir remote:album/newAlbum
    +

    List the contents of an album

    +
    rclone ls remote:album/newAlbum
    +

Sync /home/local/images to Google Photos, removing any excess files in the album.

    +
rclone sync /home/local/images remote:album/newAlbum
    +

    Layout

    +

    As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it.

    +

    The directories under media show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to backup remote:media/by-month. (NB remote:media/by-day is rather slow at the moment so avoid for syncing.)
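A backup along those lines might look like this (the remote name is from the example above; the destination is an assumption):

```shell
# Copy everything, organized by month, to a local backup directory.
rclone copy remote:media/by-month /home/backup/google-photos
```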

    +

    Note that all your photos and videos will appear somewhere under media, but they may not appear under album unless you’ve put them into albums.

    +
    /
    +- upload
    +    - file1.jpg
    +    - file2.jpg
    +    - ...
    +- media
    +    - all
    +        - file1.jpg
    +        - file2.jpg
    +        - ...
    +    - by-year
    +        - 2000
    +            - file1.jpg
    +            - ...
    +        - 2001
    +            - file2.jpg
    +            - ...
    +        - ...
    +    - by-month
    +        - 2000
    +            - 2000-01
    +                - file1.jpg
    +                - ...
    +            - 2000-02
    +                - file2.jpg
    +                - ...
    +        - ...
    +    - by-day
    +        - 2000
    +            - 2000-01-01
    +                - file1.jpg
    +                - ...
    +            - 2000-01-02
    +                - file2.jpg
    +                - ...
    +        - ...
    +- album
    +    - album name
    +    - album name/sub
    +- shared-album
    +    - album name
    +    - album name/sub
    +

There are two writable parts of the tree: the upload directory and subdirectories of the album directory.

    +

The upload directory is for uploading files you don’t want to put into albums. This will be empty to start with and will contain the files you’ve uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to dump into Google Photos once. For repeated syncing, uploading to album will work better.

    +

Directories within the album directory are also writable and you may create new directories (albums) under album. If you copy files with a directory hierarchy in there then rclone will create albums with the / character in them. For example if you do

    +
    rclone copy /path/to/images remote:album/images
    +

    and the images directory contains

    +
    images
    +    - file1.jpg
    +    dir
    +        file2.jpg
    +    dir2
    +        dir3
    +            file3.jpg
    +

Then rclone will create the following albums with the following files in:

    images - file1.jpg
    images/dir - file2.jpg
    images/dir2/dir3 - file3.jpg

    This means that you can use the album path pretty much like a normal filesystem and it is a good target for repeated syncing.

    +

    The shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.

    +

    Limitations

    +

Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn’t understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.

    +

Note that all media items uploaded to Google Photos through the API are stored in full resolution at “original quality” and will count towards your storage quota in your Google Account. The API does not offer a way to upload in “high quality” mode.

    +

    Downloading Images

    +

When images are downloaded, the EXIF location data is stripped (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.

    +

    Downloading Videos

    +

When videos are downloaded they are served as a much more heavily compressed version than the one available via the Google Photos web interface. This is covered by bug #113672044.

    +

    Duplicates

    +

    If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer alas!).

    +

If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practice this shouldn’t cause too many problems.

    +

    Modified time

    +

    The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.

    +

    This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes.

    +

    Size

    +

    The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check.

    +

    It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size option or the read_size = true config parameter.

    +

    If you want to use the backend with rclone mount you will need to enable this flag otherwise you will not be able to read media off the mount.
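A mount sketch with the flag enabled (the remote name and mountpoint are assumptions):

```shell
# Read sizes up front so files on the mount can actually be opened.
rclone mount --gphotos-read-size remote:media/all /mnt/photos
```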

    +

    Albums

    +

    Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.

    +

Rclone can only remove files that it uploaded, and only from albums that it created.

    +

    Deleting files

    +

    Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See bug #109759781.

    +

    Rclone cannot delete files anywhere except under album.

    +

    Deleting albums

    +

    The Google Photos API does not support deleting albums - see bug #135714733.


    Standard Options

    +

    Here are the standard options specific to google photos (Google Photos).

    +

    –gphotos-client-id

    +

    Google Application Client Id Leave blank normally.


    –gphotos-client-secret

    +

    Google Application Client Secret Leave blank normally.


    –gphotos-read-only

    +

    Set to make the Google Photos backend read only.

    +

    If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access.

    + +

    Advanced Options

    +

    Here are the advanced options specific to google photos (Google Photos).

    +

    –gphotos-read-size

    +

    Set to read the size of media items.

    +

    Normally rclone does not read the size of media items since this takes another transaction. This isn’t necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.

    + +
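    For example, the flag can be passed directly to rclone mount. This is only a sketch: the remote name gphotos and the mountpoint /mnt/photos are hypothetical, and the remote must already be configured.

    rclone mount --gphotos-read-size gphotos:media/all /mnt/photos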

    HTTP

    The HTTP remote is a read only remote for reading files of a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn’t then please file an issue, or send a pull request!)

    Paths are specified as remote: or remote:path/to/dir.

    @@ -8790,36 +9675,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" -10 / Local Disk - \ "local" -11 / Microsoft OneDrive - \ "onedrive" -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -13 / SSH/SFTP Connection - \ "sftp" -14 / Yandex Disk - \ "yandex" -15 / http Connection +[snip] +XX / http Connection \ "http" +[snip] Storage> http URL of http host to connect to Choose a number from below, or type in your own value @@ -8858,7 +9717,7 @@ e/n/d/r/c/s/q> q
    rclone sync remote:directory /home/local/directory

    Read only

    This remote is read only - you can’t upload files to an HTTP server.

    Modified time

    Most HTTP servers store time accurate to 1 second.

    Checksum

    No checksums are stored.

    @@ -8866,7 +9725,7 @@ e/n/d/r/c/s/q> q

    Since the http remote only has one config parameter it is easy to use without a config file:

    rclone lsd --http-url https://beta.rclone.org :http:

    Standard Options

    Here are the standard options specific to http (http Connection).

    –http-url

    URL of http host to connect to

    @@ -8887,8 +9746,20 @@ e/n/d/r/c/s/q> q -

    Advanced Options

    Here are the advanced options specific to http (http Connection).

    –http-headers

    Set HTTP headers for all transactions

    Use this to set additional HTTP headers for all transactions

    The input format is a comma separated list of key,value pairs. Standard CSV encoding may be used.

    For example to set a Cookie use ‘Cookie,name=value’, or ‘“Cookie”,“name=value”’.

    You can set multiple headers, eg ‘“Cookie”,“name=value”,“Authorization”,“xxx”’.
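    As an illustration of the CSV quoting described above (the URL is only an example), two headers can be supplied in one flag:

    rclone lsd --http-url https://example.com --http-headers '"Cookie","name=value","Authorization","xxx"' :http: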

    –http-no-slash

    Set this if the site doesn’t end directories with /

    Use this if your target website does not use / on the end of directories.

    @@ -8914,33 +9785,11 @@ n/s> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic +[snip] +XX / Hubic \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 8 +[snip] +Storage> hubic Hubic Client Id - leave blank normally. client_id> Hubic Client Secret - leave blank normally. @@ -8979,12 +9828,12 @@ y/e/d> y
    rclone copy /home/source remote:default/backup

    –fast-list

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    Modified time

    The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

    This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

    Note that Hubic wraps the Swift backend, so most of the properties are the same.

    Standard Options

    Here are the standard options specific to hubic (Hubic).

    –hubic-client-id

    Hubic Client Id Leave blank normally.

    @@ -9002,7 +9851,7 @@ y/e/d> y
  • Type: string
  • Default: ""

    Advanced Options

    Here are the advanced options specific to hubic (Hubic).

    –hubic-chunk-size

    Above this size files will be chunked into a _segments container.

    @@ -9025,7 +9874,7 @@ y/e/d> y
  • Default: false

    Limitations

    This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.

    The Swift API doesn’t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won’t check or use the MD5SUM for these.

    Jottacloud

    @@ -9045,15 +9894,12 @@ Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] -14 / JottaCloud +XX / JottaCloud \ "jottacloud" [snip] Storage> jottacloud ** See help for jottacloud backend at: https://rclone.org/jottacloud/ ** -User Name: -Enter a string value. Press Enter for the default (""). -user> user@email.tld Edit advanced config? (y/n) y) Yes n) No @@ -9067,6 +9913,7 @@ Rclone has it's own Jottacloud API KEY which works fine as long as one only y) Yes n) No y/n> y +Username> 0xC4KE@gmail.com Your Jottacloud password is only required during setup and will not be stored. password: @@ -9078,7 +9925,7 @@ y/n> y Please select the device to use. Normally this will be Jotta Choose a number from below, or type in an existing value 1 > DESKTOP-3H31129 - 2 > test1 + 2 > fla1 3 > Jotta Devices> 3 Please select the mountpoint to user. Normally this will be Archive @@ -9113,11 +9960,11 @@ y/e/d> y

    –fast-list

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to long wait time before the first results are shown.

    Modified time and hashes

    Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

    Jottacloud supports MD5 type hashes, so you can use the --checksum flag.

    Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the TMPDIR environment variable points to) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag.
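    If the default temporary directory is too small for such uploads, TMPDIR can be redirected for the duration of the transfer. A sketch, assuming a hypothetical scratch directory and remote name:

    TMPDIR=/mnt/scratch rclone copy /home/source remote:backup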

    Deleting files

    By default rclone will send all files to the trash when deleting files. Due to a lack of API documentation emptying the trash is currently only possible via the Jottacloud website. If deleting permanently is required then use the --jottacloud-hard-delete flag, or set the equivalent environment variable.
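    For example, to delete a file permanently rather than sending it to the trash (the remote name and path are hypothetical):

    rclone delete --jottacloud-hard-delete remote:backup/old-file.txt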

    Versions

    Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website.

    @@ -9126,17 +9973,7 @@ y/e/d> y

    Device IDs

    Jottacloud requires each ‘device’ to be registered. Rclone brings such a registration to easily access your account, but if you want to use Jottacloud together with rclone on multiple machines you NEED to create a separate deviceID/deviceSecret on each machine. You will be asked for this during setup of the remote. Please be aware that this also means that copying the rclone config from one machine to another does NOT work with Jottacloud accounts. You have to create it on each machine.

    Advanced Options

    Here are the advanced options specific to jottacloud (JottaCloud).

    –jottacloud-md5-memory-limit

    Files bigger than this will be cached on disk to calculate the MD5 if required.

    @@ -9171,7 +10008,7 @@ y/e/d> y
  • Default: 10M

    Limitations

    Note that Jottacloud is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.

    There are quite a few characters that can’t be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.

    Jottacloud only supports filenames up to 255 characters in length.

    @@ -9193,60 +10030,10 @@ name> koofr Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value - 1 / A stackable unification remote, which can appear to merge the contents of several remotes - \ "union" - 2 / Alias for an existing remote - \ "alias" - 3 / Amazon Drive - \ "amazon cloud drive" - 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc) - \ "s3" - 5 / Backblaze B2 - \ "b2" - 6 / Box - \ "box" - 7 / Cache a remote - \ "cache" - 8 / Dropbox - \ "dropbox" - 9 / Encrypt/Decrypt a remote - \ "crypt" -10 / FTP Connection - \ "ftp" -11 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" -12 / Google Drive - \ "drive" -13 / Hubic - \ "hubic" -14 / JottaCloud - \ "jottacloud" -15 / Koofr +[snip] +XX / Koofr \ "koofr" -16 / Local Disk - \ "local" -17 / Mega - \ "mega" -18 / Microsoft Azure Blob Storage - \ "azureblob" -19 / Microsoft OneDrive - \ "onedrive" -20 / OpenDrive - \ "opendrive" -21 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -22 / Pcloud - \ "pcloud" -23 / QingCloud Object Storage - \ "qingstor" -24 / SSH/SFTP Connection - \ "sftp" -25 / Webdav - \ "webdav" -26 / Yandex Disk - \ "yandex" -27 / http Connection - \ "http" +[snip] Storage> koofr ** See help for koofr backend at: https://rclone.org/koofr/ ** @@ -9286,7 +10073,7 @@ y/e/d> y

    To copy a local directory to a Koofr directory called backup

    rclone copy /home/source remote:backup

    Standard Options

    Here are the standard options specific to koofr (Koofr).

    –koofr-user

    Your Koofr user name

    @@ -9304,7 +10091,7 @@ y/e/d> y
  • Type: string
  • Default: ""

    Advanced Options

    Here are the advanced options specific to koofr (Koofr).

    –koofr-endpoint

    The Koofr API endpoint to use

    @@ -9322,8 +10109,16 @@ y/e/d> y
  • Type: string
  • Default: ""

    –koofr-setmtime

    Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.
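    As a sketch, a config file entry with this option disabled might look like the following (the user name is hypothetical, and the password would normally be obscured by rclone config):

    [koofr]
    type = koofr
    user = you@example.com
    password = <obscured password>
    setmtime = false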

    Limitations

    Note that Koofr is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.

    Mega

    Mega is a cloud storage and file hosting service known for its security feature where all files are encrypted locally before they are uploaded. This prevents anyone (including employees of Mega) from accessing the files without knowledge of the key used for encryption.

    @@ -9341,14 +10136,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Alias for an existing remote - \ "alias" [snip] -14 / Mega +XX / Mega \ "mega" [snip] -23 / http Connection - \ "http" Storage> mega User name user> you@example.com @@ -9380,9 +10171,9 @@ y/e/d> y
    rclone ls remote:

    To copy a local directory to a Mega directory called backup

    rclone copy /home/source remote:backup

    Modified time and hashes

    Mega does not support modification times or hashes yet.

    Duplicated files

    Mega can have two files with exactly the same name and path (unlike a normal file system).

    Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

    Use rclone dedupe to fix duplicated files.
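    For example, to resolve duplicates non-interactively by keeping the most recently modified copy (the path is hypothetical):

    rclone dedupe --dedupe-mode newest remote:dir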

    @@ -9398,7 +10189,7 @@ y/e/d> y

    Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits - if you discover something relevant, please post on the forum.

    So, if rclone was working nicely and suddenly you are unable to log in and you are sure the user and the password are correct, it is likely that the remote has been blocked for a while.

    Standard Options

    Here are the standard options specific to mega (Mega).

    –mega-user

    User name

    @@ -9416,7 +10207,7 @@ y/e/d> y
  • Type: string
  • Default: ""

    Advanced Options

    Here are the advanced options specific to mega (Mega).

    –mega-debug

    Output more debug from Mega.

    @@ -9437,7 +10228,7 @@ y/e/d> y
  • Default: false

    Limitations

    This backend uses the go-mega Go library, which is an open source Go library implementing the Mega API. There doesn’t appear to be any documentation for the Mega protocol beyond the Mega C++ SDK source code, so there are likely quite a few errors still remaining in this library.

    Mega allows duplicate files which may confuse rclone.

    Microsoft Azure Blob Storage

    @@ -9453,40 +10244,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box - \ "box" - 5 / Dropbox - \ "dropbox" - 6 / Encrypt/Decrypt a remote - \ "crypt" - 7 / FTP Connection - \ "ftp" - 8 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 9 / Google Drive - \ "drive" -10 / Hubic - \ "hubic" -11 / Local Disk - \ "local" -12 / Microsoft Azure Blob Storage +[snip] +XX / Microsoft Azure Blob Storage \ "azureblob" -13 / Microsoft OneDrive - \ "onedrive" -14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -15 / SSH/SFTP Connection - \ "sftp" -16 / Yandex Disk - \ "yandex" -17 / http Connection - \ "http" +[snip] Storage> azureblob Storage Account Name account> account_name @@ -9515,7 +10276,7 @@ y/e/d> y
    rclone sync /home/local/directory remote:container

    –fast-list

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

    Modified time

    The modified time is stored as metadata on the object with the mtime key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it.

    Hashes

    MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, eg the local disk.

    @@ -9540,10 +10301,10 @@ rclone ls azureblob:othercontainer

    Files can’t be split into more than 50,000 chunks, so by default the largest file that can be uploaded with a 4MB chunk size is 195GB. Above this rclone will double the chunk size until it creates less than 50,000 chunks. By default this will mean a maximum file size of 3.2TB can be uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M.

    Note that rclone doesn’t commit the block list until the end of the upload which means that there is a limit of 9.5TB of multipart uploads in progress as Azure won’t allow more than that amount of uncommitted blocks.
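    For example, to raise the maximum file size to 5TB as described above (the paths are hypothetical):

    rclone copy --azureblob-chunk-size 100M /home/source remote:container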

    Standard Options

    Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).

    –azureblob-account

    Storage Account Name (leave blank to use SAS URL or Emulator)

    –azureblob-key

    Storage Account Key (leave blank to use SAS URL or Emulator)

    –azureblob-sas-url

    SAS URL for container level access only (leave blank if using account/key or Emulator)

    –azureblob-use-emulator

    Uses local storage emulator if provided as ‘true’ (leave blank if using real azure storage endpoint)

    Advanced Options

    Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).

    –azureblob-endpoint

    Endpoint for the service Leave blank normally.

    @@ -9613,8 +10382,10 @@ rclone ls azureblob:othercontainer
  • Default: ""

    Limitations

    MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.

    Azure Storage Emulator Support

    You can test rclone with the storage emulator locally. To do this, make sure the Azure storage emulator is installed locally, then set up a new remote with rclone config following the instructions described in the introduction and set the use_emulator config option to true. You do not need to provide a default account name or key when using the emulator.

    Microsoft OneDrive

    Paths are specified as remote:path

    Paths may be as deep as required, eg remote:directory/subdirectory.

    @@ -9634,11 +10405,11 @@ name> remote Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value -... -18 / Microsoft OneDrive +[snip] +XX / Microsoft OneDrive \ "onedrive" -... -Storage> 18 +[snip] +Storage> onedrive Microsoft App Client Id Leave blank normally. Enter a string value. Press Enter for the default (""). @@ -9713,14 +10484,14 @@ y/e/d> y
  • Scroll to the bottom and click Save.
  • Now the application is complete. Run rclone config to create or edit a OneDrive remote. Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.

    Modified time and hashes

    OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

    OneDrive personal supports SHA1 type hashes. OneDrive for business and Sharepoint Server support QuickXorHash.

    For all types of OneDrive you can use the --checksum flag.
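    For example, a sync which compares checksums rather than modification time and size (the paths are hypothetical):

    rclone sync --checksum /home/source remote:backup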

    Deleting files

    Any files you delete with rclone will end up in the trash. Microsoft doesn’t provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft’s apps or via the OneDrive website.

    Standard Options

    Here are the standard options specific to onedrive (Microsoft OneDrive).

    –onedrive-client-id

    Microsoft App Client Id Leave blank normally.

    @@ -9738,7 +10509,7 @@ y/e/d> y
  • Type: string
  • Default: ""

    Advanced Options

    Here are the advanced options specific to onedrive (Microsoft OneDrive).

    –onedrive-chunk-size

    Chunk size to upload files with - must be multiple of 320k.

    @@ -9775,7 +10546,7 @@ y/e/d> y
  • Default: false

    Limitations

    Note that OneDrive is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.

    There are quite a few characters that can’t be in OneDrive file names. These can’t occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.

    The largest allowed file sizes are 15GB for OneDrive for Business and 35GB for OneDrive Personal (Updated 4 Jan 2019).

    @@ -9829,35 +10600,11 @@ e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / OpenDrive +[snip] +XX / OpenDrive \ "opendrive" -11 / Microsoft OneDrive - \ "onedrive" -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -13 / SSH/SFTP Connection - \ "sftp" -14 / Yandex Disk - \ "yandex" -Storage> 10 +[snip] +Storage> opendrive Username username> Password @@ -9886,7 +10633,7 @@ y/e/d> y

    Modified time and MD5SUMs

    OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

    Standard Options

    Here are the standard options specific to opendrive (OpenDrive).

    –opendrive-username

    Username

    @@ -9905,7 +10652,7 @@ y/e/d> y
  • Default: ""

    Limitations

    Note that OpenDrive is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.

    There are quite a few characters that can’t be in OpenDrive file names. These can’t occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.

    QingStor

    @@ -9923,37 +10670,11 @@ n/r/c/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" -10 / Local Disk - \ "local" -11 / Microsoft OneDrive - \ "onedrive" -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -13 / QingStor Object Storage +[snip] +XX / QingStor Object Storage \ "qingstor" -14 / SSH/SFTP Connection - \ "sftp" -15 / Yandex Disk - \ "yandex" -Storage> 13 +[snip] +Storage> qingstor Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter QingStor credentials in the next step @@ -10027,7 +10748,7 @@ y/e/d> y -

    Standard Options

    Here are the standard options specific to qingstor (QingCloud Object Storage).

    –qingstor-env-auth

    Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.

    @@ -10098,7 +10819,7 @@ y/e/d> y -

    Advanced Options

    Here are the advanced options specific to qingstor (QingCloud Object Storage).

    –qingstor-connection-retries

    Number of connection retries.

    @@ -10161,48 +10882,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box - \ "box" - 5 / Cache a remote - \ "cache" - 6 / Dropbox - \ "dropbox" - 7 / Encrypt/Decrypt a remote - \ "crypt" - 8 / FTP Connection - \ "ftp" - 9 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" -10 / Google Drive - \ "drive" -11 / Hubic - \ "hubic" -12 / Local Disk - \ "local" -13 / Microsoft Azure Blob Storage - \ "azureblob" -14 / Microsoft OneDrive - \ "onedrive" -15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) +[snip] +XX / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" -16 / Pcloud - \ "pcloud" -17 / QingCloud Object Storage - \ "qingstor" -18 / SSH/SFTP Connection - \ "sftp" -19 / Webdav - \ "webdav" -20 / Yandex Disk - \ "yandex" -21 / http Connection - \ "http" +[snip] Storage> swift Get swift credentials from environment variables in standard OpenStack form. Choose a number from below, or type in your own value @@ -10325,7 +11008,7 @@ rclone lsd myremote:

    As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

    For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is “dirty”. By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
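    A sketch of that combination (the paths are hypothetical):

    rclone copy --update --use-server-modtime /home/source remote:container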

    Standard Options

    Here are the standard options specific to swift (Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).

    –swift-env-auth

    Get swift credentials from environment variables in standard OpenStack form.

    @@ -10540,7 +11223,7 @@ rclone lsd myremote: -

    Advanced Options

    Here are the advanced options specific to swift (Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).

    –swift-chunk-size

    Above this size files will be chunked into a _segments container.

    @@ -10563,10 +11246,10 @@ rclone lsd myremote:
  • Default: false

    Modified time

    The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

    This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

    Limitations

    The Swift API doesn’t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won’t check or use the MD5SUM for these.

    Troubleshooting

    Rclone gives Failed to create file system for “remote:”: Bad Request

    @@ -10590,44 +11273,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box - \ "box" - 5 / Dropbox - \ "dropbox" - 6 / Encrypt/Decrypt a remote - \ "crypt" - 7 / FTP Connection - \ "ftp" - 8 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 9 / Google Drive - \ "drive" -10 / Hubic - \ "hubic" -11 / Local Disk - \ "local" -12 / Microsoft Azure Blob Storage - \ "azureblob" -13 / Microsoft OneDrive - \ "onedrive" -14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -15 / Pcloud +[snip] +XX / Pcloud \ "pcloud" -16 / QingCloud Object Storage - \ "qingstor" -17 / SSH/SFTP Connection - \ "sftp" -18 / Yandex Disk - \ "yandex" -19 / http Connection - \ "http" +[snip] Storage> pcloud Pcloud App Client Id - leave blank normally. client_id> @@ -10663,13 +11312,13 @@ y/e/d> y
    rclone ls remote:

    To copy a local directory to a pCloud directory called backup

    rclone copy /home/source remote:backup

    Modified time and hashes

    pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a Modification time pCloud requires the object be re-uploaded.

    pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum flag.

    Deleting files

    Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup can be used to empty the trash.

    Standard Options

    Here are the standard options specific to pcloud (Pcloud).

    –pcloud-client-id

    Pcloud App Client Id Leave blank normally.

    @@ -10688,8 +11337,151 @@ y/e/d> y
  • Default: ""

    premiumize.me

    Paths are specified as remote:path

    Paths may be as deep as required, eg remote:directory/subdirectory.

    The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    [snip]
    XX / premiumize.me
       \ "premiumizeme"
    [snip]
    Storage> premiumizeme
    ** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **

    Remote config
    Use auto config?
     * Say Y if not sure
     * Say N if you are working on a remote or headless machine
    y) Yes
    n) No
    y/n> y
    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    Log in and authorize rclone for access
    Waiting for code...
    Got code
    --------------------
    [remote]
    type = premiumizeme
    token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> 

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    +

    Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall.

    +

    Once configured you can then use rclone like this,

    List directories in top level of your premiumize.me

        rclone lsd remote:

    List all the files in your premiumize.me

        rclone ls remote:

    To copy a local directory to a premiumize.me directory called backup

        rclone copy /home/source remote:backup

    Modified time and hashes


    premiumize.me does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work.


    Standard Options

    Here are the standard options specific to premiumizeme (premiumize.me).

    --premiumizeme-api-key

    API Key.

    This is not normally used - use oauth instead.

    Limitations

    Note that premiumize.me is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.

    premiumize.me file names can’t have the \ or " characters in them. rclone maps these to and from identical looking unicode equivalents ＼ and ＂.

    premiumize.me only supports filenames up to 255 characters in length.

    put.io

    Paths are specified as remote:path

    put.io paths may be as deep as required, eg remote:directory/subdirectory.

    The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

        rclone config

    This will guide you through an interactive setup process:
        No remotes found - make a new one
        n) New remote
        s) Set configuration password
        q) Quit config
        n/s/q> n
        name> putio
        Type of storage to configure.
        Enter a string value. Press Enter for the default ("").
        Choose a number from below, or type in your own value
        [snip]
        XX / Put.io
           \ "putio"
        [snip]
        Storage> putio
        ** See help for putio backend at: https://rclone.org/putio/ **

        Remote config
        Use auto config?
         * Say Y if not sure
         * Say N if you are working on a remote or headless machine
        y) Yes
        n) No
        y/n> y
        If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
        Log in and authorize rclone for access
        Waiting for code...
        Got code
        --------------------
        [putio]
        type = putio
        token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
        --------------------
        y) Yes this is OK
        e) Edit this remote
        d) Delete this remote
        y/e/d> y
        Current remotes:

        Name                 Type
        ====                 ====
        putio                putio

        e) Edit existing remote
        n) New remote
        d) Delete remote
        r) Rename remote
        c) Copy remote
        s) Set configuration password
        q) Quit config
        e/n/d/r/c/s/q> q

    Note that rclone runs a webserver on your local machine to collect the token as returned from put.io if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. It is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

    You can then use it like this,

    List directories in top level of your put.io

        rclone lsd remote:

    List all the files in your put.io

        rclone ls remote:

    To copy a local directory to a put.io directory called backup

        rclone copy /home/source remote:backup

    SFTP

    SFTP is the Secure (or SSH) File Transfer Protocol.

    The SFTP backend can be used with a number of different providers:

  • C14
  • rsync.net

    SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.

    Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user’s home directory.

    Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net, on the other hand, requires users to OMIT the leading /.

        n/s/q> n
        name> remote
        Type of storage to configure.
        Choose a number from below, or type in your own value
        [snip]
        XX / SSH/SFTP Connection
           \ "sftp"
        [snip]
        Storage> sftp
        SSH host to connect to
        Choose a number from below, or type in your own value
        [snip]
        host> example.com
        SSH username, leave blank for current username, ncw
        user> sftpuser
        SSH port, leave blank to use default (22)
        port>
        SSH password, leave blank to use ssh-agent.
        y) Yes type in my own password
        g) Generate random password
        n) No leave this optional password blank
        y/g/n> n
        Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
        key_file>
        Remote config
        --------------------
        [remote]
        host = example.com
        user = sftpuser
        port =
        pass =
        key_file =
        --------------------
        y) Yes this is OK
        e) Edit this remote
        d) Delete this remote
        y/e/d> y

    And then at the end of the session

    eval `ssh-agent -k`

    These commands can be used in scripts of course.

    Modified time

    Modified times are stored on the server to 1 second precision.

    Modified times are used in syncing and are fully supported.

    Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your rclone backend configuration to disable this behaviour.
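    As a sketch, such a remote's config file entry would carry the option like this (the remote name, host and user here are illustrative, not from the text):

```ini
[mysftp]
type = sftp
host = sftp.example.com
user = sftpuser
# this server rejects setting modification times after upload,
# so tell rclone not to try
set_modtime = false
```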

    Standard Options

    Here are the standard options specific to sftp (SSH/SFTP Connection).

    --sftp-host

    SSH host to connect to

  • Default: false

    --sftp-use-insecure-cipher

    Enable the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.

  • Type: bool
  • Default: false
    Advanced Options

    Here are the advanced options specific to sftp (SSH/SFTP Connection).

    --sftp-ask-password

    Allow asking for SFTP password when needed.

  • Type: bool
  • Default: true

    --sftp-md5sum-command

    The command used to read md5 hashes. Leave blank for autodetect.

    --sftp-sha1sum-command

    The command used to read sha1 hashes. Leave blank for autodetect.

    Limitations

    SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote’s PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming.

    SFTP also supports about if the same login has shell access and df is in the remote’s PATH. about will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. about will fail if it does not have shell access or if df is not in the remote’s PATH.

    Note that on some SFTP servers (eg Synology) the paths are different for SSH and SFTP so the hashes can’t be calculated properly. For them using disable_hashcheck is a good idea.
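    For such servers, a config entry with checksumming disabled might look like this (remote name and host are examples, not from the text):

```ini
[mynas]
type = sftp
host = nas.example.com
user = sftpuser
# SSH and SFTP see different paths on this server,
# so remote md5sum/sha1sum results would be wrong
disable_hashcheck = true
```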


    SFTP isn’t supported under plan9 until this issue is fixed.

    Note that since SFTP isn’t HTTP based the following flags don’t work with it: --dump-headers, --dump-bodies, --dump-auth

    Note that --timeout isn’t supported (but --contimeout is).

    C14

    C14 is supported through the SFTP backend.

    See C14’s documentation

    rsync.net

    rsync.net is supported through the SFTP backend.

    See rsync.net’s documentation of rclone examples.

    Union

    The union remote provides a unification similar to UnionFS using other remotes.

    Paths may be as deep as required or a local path, eg remote:directory/subdirectory or /directory/subdirectory.

        n/s/q> n
        name> remote
        Type of storage to configure.
        Choose a number from below, or type in your own value
        [snip]
        XX / Union merges the contents of several remotes
           \ "union"
        [snip]
        Storage> union
        List of space separated remotes.
        Can be 'remotea:test/dir remoteb:', '"remotea:test/space dir" remoteb:', etc.
        [snip]
        e/n/d/r/c/s/q> q

    Copy another local directory to the union directory called source, which will be placed into C:\dir3

    rclone copy C:\source remote:source
    Standard Options

    Here are the standard options specific to union (Union merges the contents of several remotes).

    --union-remotes

    List of space separated remotes. Can be ‘remotea:test/dir remoteb:’, ‘“remotea:test/space dir” remoteb:’, etc. The last remote is used to write to.
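    For example, a config file entry matching the local-folder example above might look like the following sketch (the remote name and paths are illustrative):

```ini
[union]
type = union
# the last remote (C:\dir3) is the one that receives writes
remotes = C:\dir1 C:\dir2 C:\dir3
```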

    Advanced Options

    Here are the advanced options specific to webdav (Webdav).

    --webdav-bearer-token-command

    Command to run to get a bearer token

    Provider notes

    See below for notes on specific providers.


    Owncloud supports modified times using the X-OC-Mtime header.

    Nextcloud

    This is configured in an identical way to Owncloud. Note that Nextcloud does not support streaming of files (rcat) whereas Owncloud does. This may be fixed in the future.


    Sharepoint

    Rclone can be used with Sharepoint provided by OneDrive for Business or Office365 Education Accounts. This feature is only needed for a few of these Accounts, mostly Office365 Education ones. These accounts are sometimes not verified by the domain owner github#1975

    This means that these accounts can’t be added using the official API (other Accounts should work with the “onedrive” option). However, it is possible to access them using webdav.


    dCache


    dCache is a storage system that supports many protocols and authentication/authorisation schemes. For WebDAV clients, it allows users to authenticate with username and password (BASIC), X.509, Kerberos, and various bearer tokens, including Macaroons and OpenID-Connect access tokens.

    Configure as normal using the other type. Don’t enter a username or password, instead enter your Macaroon as the bearer_token.

    The config will end up looking something like this.

        [dcache]
        type = webdav
        url = https://dcache.example.org/
        vendor = other
        user =
        pass =
        bearer_token = your-macaroon

    There is a script that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file.

    Macaroons may also be obtained from the dCacheView web-browser/JavaScript client that comes with dCache.

    OpenID-Connect

    dCache also supports authenticating with OpenID-Connect access tokens. OpenID-Connect is a protocol (based on OAuth 2.0) that allows services to identify users who have authenticated with some central service.

    Support for OpenID-Connect in rclone is currently achieved using another software package called oidc-agent. This is a command-line tool that facilitates obtaining an access token. Once installed and configured, an access token is obtained by running the oidc-token command. The following example shows a (shortened) access token obtained from the XDC OIDC Provider.

        paul@celebrimbor:~$ oidc-token XDC
        eyJraWQ[...]QFXDt0
        paul@celebrimbor:~$

    Note Before the oidc-token command will work, the refresh token must be loaded into the oidc agent. This is done with the oidc-add command (e.g., oidc-add XDC). This is typically done once per login session. Full details on this and how to register oidc-agent with your OIDC Provider are provided in the oidc-agent documentation.

    The rclone bearer_token_command configuration option is used to fetch the access token from oidc-agent.

    Configure as a normal WebDAV endpoint, using the ‘other’ vendor, leaving the username and password empty. When prompted, choose to edit the advanced config and enter the command to get a bearer token (e.g., oidc-token XDC).

    The following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from the XDC OIDC Provider.

        [dcache]
        type = webdav
        url = https://dcache.example.org/
        vendor = other
        bearer_token_command = oidc-token XDC

    Yandex Disk

    Yandex Disk is a cloud storage solution created by Yandex.

    Yandex paths may be as deep as required, eg remote:directory/subdirectory.

        n/s> n
        name> remote
        Type of storage to configure.
        Choose a number from below, or type in your own value
        [snip]
        XX / Yandex Disk
           \ "yandex"
        [snip]
        Storage> yandex
        Yandex Client Id - leave blank normally.
        client_id>
        Yandex Client Secret - leave blank normally.
        [snip]
        y/e/d> y
    rclone ls remote:directory

    Sync /home/local/directory to the remote path, deleting any excess files in the path.

    rclone sync /home/local/directory remote:directory
    Modified time

    Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format.

    MD5 checksums

    MD5 checksums are natively supported by Yandex Disk.

    @@ -11326,10 +12058,10 @@ y/e/d> y

    If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.

    Quota information

    To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage.

    Limitations

    When uploading very large files (bigger than about 5GB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you’ll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GB should be enough, so if you want to upload a 30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
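    The arithmetic above can be sketched as a tiny shell snippet that derives the flag value from a file size in GB (the 30 GB figure is the example from the text):

```shell
# Rule of thumb from the text: timeout (minutes) = 2 * file size (GB).
size_gb=30
timeout_min=$((2 * size_gb))
printf -- '--timeout %sm\n' "$timeout_min"   # prints: --timeout 60m
```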

    Standard Options

    Here are the standard options specific to yandex (Yandex Disk).

    --yandex-client-id

    Yandex Client Id Leave blank normally.

  • Type: string
  • Default: ""

    Advanced Options

    Here are the advanced options specific to yandex (Yandex Disk).

    --yandex-unlink

    Remove existing public link to file/folder with link command rather than creating. Default is false, meaning link command will create or retrieve public link.

    rclone sync /home/source /tmp/destination

    Will sync /home/source to /tmp/destination

    These can be configured into the config file for consistency’s sake, but it is probably easier not to.
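    If you do want a named local remote, a minimal config file entry looks like the following sketch (the name mylocal is just an example):

```ini
[mylocal]
type = local
```

    With that in place, rclone ls mylocal:/tmp/destination is equivalent to rclone ls /tmp/destination.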

    Modified time

    Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.

    Filenames

    Filenames are expected to be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.


    NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.

    NB This flag is only available on Unix based systems. On systems where it isn’t supported (eg Windows) it will be ignored.

    Standard Options

    Here are the standard options specific to local (Local Disk).

    --local-nounc

    Disable UNC (long path names) conversion on Windows

    Advanced Options

    Here are the advanced options specific to local (Local Disk).

    Follow symlinks and copy the pointed to item.

  • Type: bool
  • Default: false

    --local-case-sensitive

    Force the filesystem to report itself as case sensitive.

    Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.

    --local-case-insensitive

    Force the filesystem to report itself as case insensitive

    Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.
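    As a sketch, the same override can be made persistent as a backend option in the config file (the remote name is illustrative; backend flags generally map to config keys with the --local- prefix dropped):

```ini
[usbdrive]
type = local
# force case sensitive matching for this remote,
# e.g. an ext4-formatted drive mounted on macOS
case_sensitive = true
```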

    Changelog

    v1.49.0 - 2019-08-26

    v1.48.0 - 2019-06-15

  • Show URL of backend help page when starting config (Nick Craig-Wood)
  • stats: Long names now split in center (Joanna Marek)
  • Add --log-format flag for more control over log output (dcpu)
  • rc: Add support for OPTIONS and basic CORS (frenos)
  • stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes)
  • Bug Fixes
  • Mount
  • Azure Blob
  • Drive
  • Jottacloud
  • WebDAV

    Contact the rclone project

    Forum

  • [@njcw](https://twitter.com/njcw)
  • Email


    Or if all else fails or you want to ask something private or confidential email Nick Craig-Wood. Please don’t email me requests for help - those are better directed to the forum - thanks!

    diff --git a/MANUAL.md b/MANUAL.md index 824bb5a6a..0f8c4f46e 100644 --- a/MANUAL.md +++ b/MANUAL.md @@ -1,26 +1,26 @@ % rclone(1) User Manual % Nick Craig-Wood -% Jun 15, 2019 +% Aug 26, 2019 -Rclone -====== - -[![Logo](https://rclone.org/img/rclone-120x120.png)](https://rclone.org/) +# Rclone - rsync for cloud storage Rclone is a command line program to sync files and directories to and from: +* 1Fichier * Alibaba Cloud (Aliyun) Object Storage System (OSS) * Amazon Drive ([See note](/amazonclouddrive/#status)) * Amazon S3 * Backblaze B2 * Box * Ceph +* C14 * DigitalOcean Spaces * Dreamhost * Dropbox * FTP * Google Cloud Storage * Google Drive +* Google Photos * HTTP * Hubic * Jottacloud @@ -38,6 +38,7 @@ Rclone is a command line program to sync files and directories to and from: * Oracle Cloud Storage * ownCloud * pCloud +* premiumize.me * put.io * QingStor * Rackspace Cloud Files @@ -64,6 +65,7 @@ Features * Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/)) * Multi-threaded downloads to local disk * Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over [HTTP](https://rclone.org/commands/rclone_serve_http/)/[WebDav](https://rclone.org/commands/rclone_serve_webdav/)/[FTP](https://rclone.org/commands/rclone_serve_ftp/)/[SFTP](https://rclone.org/commands/rclone_serve_sftp/)/[dlna](https://rclone.org/commands/rclone_serve_dlna/) + * Experimental [Web based GUI](https://rclone.org/gui/) Links @@ -202,6 +204,7 @@ option: See the following for detailed instructions for + * [1Fichier](https://rclone.org/fichier/) * [Alias](https://rclone.org/alias/) * [Amazon Drive](https://rclone.org/amazonclouddrive/) * [Amazon S3](https://rclone.org/s3/) @@ -214,6 +217,7 @@ See the following for detailed instructions for * [FTP](https://rclone.org/ftp/) * [Google Cloud Storage](https://rclone.org/googlecloudstorage/) * [Google Drive](https://rclone.org/drive/) + * [Google Photos](https://rclone.org/googlephotos/) * 
[HTTP](https://rclone.org/http/) * [Hubic](https://rclone.org/hubic/) * [Jottacloud](https://rclone.org/jottacloud/) @@ -224,6 +228,8 @@ See the following for detailed instructions for * [Openstack Swift / Rackspace Cloudfiles / Memset Memstore](https://rclone.org/swift/) * [OpenDrive](https://rclone.org/opendrive/) * [Pcloud](https://rclone.org/pcloud/) + * [premiumize.me](https://rclone.org/premiumizeme/) + * [put.io](https://rclone.org/putio/) * [QingStor](https://rclone.org/qingstor/) * [SFTP](https://rclone.org/sftp/) * [Union](https://rclone.org/union/) @@ -276,20 +282,23 @@ rclone config [flags] -h, --help help for config ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone config create](https://rclone.org/commands/rclone_config_create/) - Create a new remote with name, type and options. * [rclone config delete](https://rclone.org/commands/rclone_config_delete/) - Delete an existing remote . +* [rclone config disconnect](https://rclone.org/commands/rclone_config_disconnect/) - Disconnects user from remote * [rclone config dump](https://rclone.org/commands/rclone_config_dump/) - Dump the config file as JSON. * [rclone config edit](https://rclone.org/commands/rclone_config_edit/) - Enter an interactive configuration session. * [rclone config file](https://rclone.org/commands/rclone_config_file/) - Show path of configuration file in use. * [rclone config password](https://rclone.org/commands/rclone_config_password/) - Update password in an existing remote. * [rclone config providers](https://rclone.org/commands/rclone_config_providers/) - List in JSON format all the providers and options. +* [rclone config reconnect](https://rclone.org/commands/rclone_config_reconnect/) - Re-authenticates user with remote. 
* [rclone config show](https://rclone.org/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote. * [rclone config update](https://rclone.org/commands/rclone_config_update/) - Update options in an existing remote. - -###### Auto generated by spf13/cobra on 15-Jun-2019 +* [rclone config userinfo](https://rclone.org/commands/rclone_config_userinfo/) - Prints info about logged in user of remote. ## rclone copy @@ -359,12 +368,12 @@ rclone copy source:path dest:path [flags] -h, --help help for copy ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone sync Make source and dest identical, modifying destination only. @@ -405,12 +414,12 @@ rclone sync source:path dest:path [flags] -h, --help help for sync ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone move Move files from source to dest. @@ -457,12 +466,12 @@ rclone move source:path dest:path [flags] -h, --help help for move ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone delete Remove the contents of path. @@ -502,12 +511,12 @@ rclone delete remote:path [flags] -h, --help help for delete ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. 
-###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone purge Remove the path and all of its contents. @@ -530,12 +539,12 @@ rclone purge remote:path [flags] -h, --help help for purge ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone mkdir Make the path if it doesn't already exist. @@ -554,12 +563,12 @@ rclone mkdir remote:path [flags] -h, --help help for mkdir ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone rmdir Remove the path if empty. @@ -580,12 +589,12 @@ rclone rmdir remote:path [flags] -h, --help help for rmdir ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone check Checks the files in the source and destination match. @@ -622,12 +631,12 @@ rclone check source:path dest:path [flags] --one-way Check one way only, source files must exist on remote ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone ls List the objects in the path with size and path. @@ -680,12 +689,12 @@ rclone ls remote:path [flags] -h, --help help for ls ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. 
+ ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone lsd List all directories/containers/buckets in the path. @@ -749,12 +758,12 @@ rclone lsd remote:path [flags] -R, --recursive Recurse into the listing. ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone lsl List the objects in path with modification time, size and path. @@ -807,12 +816,12 @@ rclone lsl remote:path [flags] -h, --help help for lsl ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone md5sum Produces an md5sum file for all the objects in the path. @@ -834,12 +843,12 @@ rclone md5sum remote:path [flags] -h, --help help for md5sum ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone sha1sum Produces an sha1sum file for all the objects in the path. @@ -861,12 +870,12 @@ rclone sha1sum remote:path [flags] -h, --help help for sha1sum ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone size Prints the total size and number of objects in remote:path. 
@@ -886,12 +895,12 @@ rclone size remote:path [flags] --json format output as JSON ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone version Show the version number. @@ -938,12 +947,12 @@ rclone version [flags] -h, --help help for version ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone cleanup Clean up the remote if possible @@ -965,12 +974,12 @@ rclone cleanup remote:path [flags] -h, --help help for cleanup ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone dedupe Interactively find duplicate files and delete/rename them. @@ -1070,12 +1079,12 @@ rclone dedupe [mode] remote:path [flags] -h, --help help for dedupe ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone about Get quota information from the remote. @@ -1138,12 +1147,12 @@ rclone about remote: [flags] --json Format output as JSON ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. 
-###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone authorize Remote authorization. @@ -1165,12 +1174,12 @@ rclone authorize [flags] -h, --help help for authorize ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone cachestats Print cache stats for a remote @@ -1191,12 +1200,12 @@ rclone cachestats source: [flags] -h, --help help for cachestats ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone cat Concatenates any files and sends them to stdout. @@ -1239,12 +1248,12 @@ rclone cat remote:path [flags] --tail int Only print the last N characters. ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone config create Create a new remote with name, type and options. @@ -1283,12 +1292,12 @@ rclone config create `name` `type` [`key` `value`]* [flags] -h, --help help for create ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone config delete Delete an existing remote `name`. @@ -1307,11 +1316,41 @@ rclone config delete `name` [flags] -h, --help help for delete ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here.
+ ### SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 15-Jun-2019 +## rclone config disconnect + +Disconnects user from remote + +### Synopsis + + +This disconnects the remote: passed in to the cloud storage system. + +This normally means revoking the oauth token. + +To reconnect use "rclone config reconnect". + + +``` +rclone config disconnect remote: [flags] +``` + +### Options + +``` + -h, --help help for disconnect +``` + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + +### SEE ALSO + +* [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. ## rclone config dump @@ -1331,12 +1370,12 @@ rclone config dump [flags] -h, --help help for dump ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone config edit Enter an interactive configuration session. @@ -1358,12 +1397,12 @@ rclone config edit [flags] -h, --help help for edit ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone config file Show path of configuration file in use. @@ -1382,12 +1421,12 @@ rclone config file [flags] -h, --help help for file ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. 
-###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone config password Update password in an existing remote. @@ -1416,12 +1455,12 @@ rclone config password `name` [`key` `value`]+ [flags] -h, --help help for password ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone config providers List in JSON format all the providers and options. @@ -1440,11 +1479,41 @@ rclone config providers [flags] -h, --help help for providers ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 15-Jun-2019 +## rclone config reconnect + +Re-authenticates user with remote. + +### Synopsis + + +This reconnects remote: passed in to the cloud storage system. + +To disconnect the remote use "rclone config disconnect". + +This normally means going through the interactive oauth flow again. + + +``` +rclone config reconnect remote: [flags] +``` + +### Options + +``` + -h, --help help for reconnect +``` + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + +### SEE ALSO + +* [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. ## rclone config show @@ -1464,12 +1533,12 @@ rclone config show [`remote`] [flags] -h, --help help for show ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone config update Update options in an existing remote.
@@ -1504,11 +1573,39 @@ rclone config update `name` [`key` `value`]+ [flags] -h, --help help for update ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. -###### Auto generated by spf13/cobra on 15-Jun-2019 +## rclone config userinfo + +Prints info about logged in user of remote. + +### Synopsis + + +This prints the details of the person logged in to the cloud storage +system. + + +``` +rclone config userinfo remote: [flags] +``` + +### Options + +``` + -h, --help help for userinfo + --json Format output as JSON +``` + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + +### SEE ALSO + +* [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. ## rclone copyto @@ -1556,12 +1653,12 @@ rclone copyto source:path dest:path [flags] -h, --help help for copyto ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone copyurl Copy url content to dest. @@ -1583,12 +1680,12 @@ rclone copyurl https://example.com dest:path [flags] -h, --help help for copyurl ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone cryptcheck Cryptcheck checks the integrity of a crypted remote.
@@ -1635,12 +1732,12 @@ rclone cryptcheck remote:path cryptedremote:path [flags] --one-way Check one way only, source files must exist on destination ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone cryptdecode Cryptdecode returns unencrypted file names. @@ -1671,12 +1768,12 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags] --reverse Reverse cryptdecode, encrypts filenames ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone dbhashsum Produces a Dropbox hash file for all the objects in the path. @@ -1700,12 +1797,12 @@ rclone dbhashsum remote:path [flags] -h, --help help for dbhashsum ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone deletefile Remove a single file from remote. @@ -1728,12 +1825,12 @@ rclone deletefile remote:path [flags] -h, --help help for deletefile ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone genautocomplete Output completion script for a given shell. @@ -1751,14 +1848,14 @@ Run with --help to list the supported shells. 
-h, --help help for genautocomplete ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone genautocomplete bash](https://rclone.org/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone. * [rclone genautocomplete zsh](https://rclone.org/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone genautocomplete bash Output bash completion script for rclone. @@ -1792,12 +1889,12 @@ rclone genautocomplete bash [output_file] [flags] -h, --help help for bash ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone genautocomplete](https://rclone.org/commands/rclone_genautocomplete/) - Output completion script for a given shell. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone genautocomplete zsh Output zsh completion script for rclone. @@ -1831,12 +1928,12 @@ rclone genautocomplete zsh [output_file] [flags] -h, --help help for zsh ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone genautocomplete](https://rclone.org/commands/rclone_genautocomplete/) - Output completion script for a given shell. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone gendocs Output markdown docs for rclone to the directory supplied. @@ -1858,12 +1955,12 @@ rclone gendocs output_directory [flags] -h, --help help for gendocs ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. 
-###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone hashsum Produces an hashsum file for all the objects in the path. @@ -1899,12 +1996,12 @@ rclone hashsum remote:path [flags] -h, --help help for hashsum ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone link Generate public link to file/folder. @@ -1933,12 +2030,12 @@ rclone link remote:path [flags] -h, --help help for link ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone listremotes List all the remotes in the config file. @@ -1962,12 +2059,12 @@ rclone listremotes [flags] --long Show the type as well as names. ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone lsf List directories and objects in remote:path formatted for parsing @@ -2112,12 +2209,12 @@ rclone lsf remote:path [flags] -s, --separator string Separator for the items in the format. (default ";") ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone lsjson List directories and objects in the path in JSON format. @@ -2218,12 +2315,12 @@ rclone lsjson remote:path [flags] -R, --recursive Recurse into the listing. 
``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone mount Mount the remote as file system on a mountpoint. @@ -2294,10 +2391,7 @@ applications won't work with their files on an rclone mount without Caching](#file-caching) section for more info. The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, -Hubic) won't work from the root - you will need to specify a bucket, -or a path within the bucket. So `swift:` won't work whereas -`swift:bucket` will as will `swift:bucket/path`. -None of these support the concept of directories, so empty +Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache. @@ -2551,12 +2645,12 @@ rclone mount remote:path /path/to/mountpoint [flags] --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone moveto Move file or directory from source to dest. @@ -2606,12 +2700,12 @@ rclone moveto source:path dest:path [flags] -h, --help help for moveto ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone ncdu Explore a remote with a text based user interface. @@ -2639,6 +2733,7 @@ Here are the keys - press '?' 
to toggle the help on and off g toggle graph n,s,C sort by name,size,count d delete file/directory + Y display current path ^L refresh screen ? to toggle help on and off q/ESC/c-C to quit @@ -2661,12 +2756,12 @@ rclone ncdu remote:path [flags] -h, --help help for ncdu ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone obscure Obscure password for use in the rclone.conf @@ -2685,12 +2780,12 @@ rclone obscure password [flags] -h, --help help for obscure ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone rc Run a command against a running rclone. @@ -2741,12 +2836,12 @@ rclone rc commands parameter [flags] --user string Username to use to rclone remote control. ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone rcat Copies standard input to file on remote. @@ -2787,12 +2882,12 @@ rclone rcat remote:path [flags] -h, --help help for rcat ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone rcd Run rclone listening to remote control commands only. 
@@ -2821,12 +2916,12 @@ rclone rcd * [flags] -h, --help help for rcd ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone rmdirs Remove empty directories under the path. @@ -2855,12 +2950,12 @@ rclone rmdirs remote:path [flags] --leave-root Do not remove root directory if empty ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone serve Serve a remote over a protocol. @@ -2885,6 +2980,8 @@ rclone serve [opts] [flags] -h, --help help for serve ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. @@ -2895,8 +2992,6 @@ rclone serve [opts] [flags] * [rclone serve sftp](https://rclone.org/commands/rclone_serve_sftp/) - Serve the remote over SFTP. * [rclone serve webdav](https://rclone.org/commands/rclone_serve_webdav/) - Serve remote:path over webdav. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone serve dlna Serve remote:path over DLNA @@ -3090,12 +3185,12 @@ rclone serve dlna remote:path [flags] --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. 
-###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone serve ftp Serve remote:path over FTP. @@ -3257,6 +3352,72 @@ This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times. +### Auth Proxy + +If you supply the parameter `--auth-proxy /path/to/program` then +rclone will use that program to generate backends on the fly which +are then used to authenticate incoming requests. This uses a simple +JSON-based protocol with input on STDIN and output on STDOUT. + +There is an example program +[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) +in the rclone source code. + +The program's job is to take a `user` and `pass` on the input and turn +those into the config for a backend on STDOUT in JSON format. This +config will have any default parameters for the backend added, but it +won't use configuration from environment variables or command line +options - it is the job of the proxy program to make a complete +config. + +The generated config must have this extra parameter +- `_root` - root to use for the backend + +And it may have this parameter +- `_obscure` - comma-separated strings for parameters to obscure + +For example the program might take this on STDIN + +``` +{ + "user": "me", + "pass": "mypassword" +} +``` + +And return this on STDOUT + +``` +{ + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" +} +``` + +This would mean that an SFTP backend would be created on the fly for +the `user` and `pass` returned in the output to the host given. Note +that since `_obscure` is set to `pass`, rclone will obscure the `pass` +parameter before creating the backend (which is required for sftp +backends).
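A proxy implementing this protocol can be very small. The Python sketch below mirrors the example exchange; the sftp host is illustrative, and this is a sketch rather than the bundled test_proxy.py:

```python
#!/usr/bin/env python3
"""Sketch of an rclone --auth-proxy helper: read {"user": ..., "pass": ...}
as JSON on STDIN and write a complete backend config as JSON on STDOUT."""
import json
import sys

def make_config(request):
    # Map the authentication request to an sftp backend config. "_root" is
    # the required extra parameter; "_obscure" asks rclone to obscure the
    # "pass" parameter before creating the backend. The host is illustrative:
    # a real proxy would validate the credentials and restrict the host to a
    # known list.
    return {
        "type": "sftp",
        "_root": "",
        "_obscure": "pass",
        "user": request["user"],
        "pass": request["pass"],
        "host": "sftp.example.com",
    }

if __name__ == "__main__":
    json.dump(make_config(json.load(sys.stdin)), sys.stdout)
```

Pointed at with `--auth-proxy /path/to/proxy.py`, rclone would run this per login and cache the resulting backend keyed on `user`.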
+ +The program can manipulate the supplied `user` in any way, for example +to proxy to many different sftp backends, you could make the +`user` be `user@example.com` and then set the `host` to `example.com` +in the output and the user to `user`. For security you'd probably want +to restrict the `host` to a limited list. + +Note that an internal cache is keyed on `user` so only use that for +configuration, don't use `pass`. This also means that if a user's +password is changed the cache will need to expire (which takes 5 mins) +before it takes effect. + +This can be used to build general purpose proxies to any kind of +backend that rclone supports. + ``` rclone serve ftp remote:path [flags] ``` ### Options ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121") + --auth-proxy string A program to use to create the backend from the auth. --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --dir-perms FileMode Directory permissions (default 0777) --file-perms FileMode File permissions (default 0666) @@ -3290,12 +3452,12 @@ rclone serve ftp remote:path [flags] --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone serve http Serve the remote over HTTP. @@ -3331,6 +3493,14 @@ for a transfer. --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. +--baseurl controls the URL prefix that rclone serves from. By default +rclone will serve from the root.
If you used --baseurl "/rclone" then +rclone would serve from a URL starting with "/rclone/". This is +useful if you wish to proxy rclone serve. Rclone automatically +inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", +--baseurl "/rclone" and --baseurl "/rclone/" are all treated +identically. + #### Authentication By default this will serve files without needing a login. @@ -3507,6 +3677,7 @@ rclone serve http remote:path [flags] ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --baseurl string Prefix for URLs - leave blank for root. --cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with --dir-cache-time duration Time to cache directory entries for. (default 5m0s) @@ -3537,12 +3708,12 @@ rclone serve http remote:path [flags] --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone serve restic Serve the remote for restic's REST API. @@ -3644,6 +3815,14 @@ for a transfer. --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. +--baseurl controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used --baseurl "/rclone" then +rclone would serve from a URL starting with "/rclone/". This is +useful if you wish to proxy rclone serve. Rclone automatically +inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", +--baseurl "/rclone" and --baseurl "/rclone/" are all treated +identically. 
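The --baseurl normalisation described above can be pictured with a small sketch (illustrative Python, not rclone's actual implementation):

```python
def normalize_baseurl(baseurl: str) -> str:
    # Mimic the documented behaviour: insert leading and trailing "/",
    # so "rclone", "/rclone" and "/rclone/" are all treated identically.
    if not baseurl.startswith("/"):
        baseurl = "/" + baseurl
    if not baseurl.endswith("/"):
        baseurl += "/"
    return baseurl
```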
+ #### Authentication By default this will serve files without needing a login. @@ -3687,6 +3866,7 @@ rclone serve restic remote:path [flags] ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") --append-only disallow deletion of repository data + --baseurl string Prefix for URLs - leave blank for root. --cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with -h, --help help for restic @@ -3702,12 +3882,12 @@ rclone serve restic remote:path [flags] --user string User name for authentication. ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone serve sftp Serve the remote over SFTP. @@ -3880,6 +4060,72 @@ This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times. +### Auth Proxy + +If you supply the parameter `--auth-proxy /path/to/program` then +rclone will use that program to generate backends on the fly which +are then used to authenticate incoming requests. This uses a simple +JSON-based protocol with input on STDIN and output on STDOUT. + +There is an example program +[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) +in the rclone source code. + +The program's job is to take a `user` and `pass` on the input and turn +those into the config for a backend on STDOUT in JSON format. This +config will have any default parameters for the backend added, but it +won't use configuration from environment variables or command line +options - it is the job of the proxy program to make a complete +config.
+ +The generated config must have this extra parameter +- `_root` - root to use for the backend + +And it may have this parameter +- `_obscure` - comma-separated strings for parameters to obscure + +For example the program might take this on STDIN + +``` +{ + "user": "me", + "pass": "mypassword" +} +``` + +And return this on STDOUT + +``` +{ + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" +} +``` + +This would mean that an SFTP backend would be created on the fly for +the `user` and `pass` returned in the output to the host given. Note +that since `_obscure` is set to `pass`, rclone will obscure the `pass` +parameter before creating the backend (which is required for sftp +backends). + +The program can manipulate the supplied `user` in any way, for example +to proxy to many different sftp backends, you could make the +`user` be `user@example.com` and then set the `host` to `example.com` +in the output and the user to `user`. For security you'd probably want +to restrict the `host` to a limited list. + +Note that an internal cache is keyed on `user` so only use that for +configuration, don't use `pass`. This also means that if a user's +password is changed the cache will need to expire (which takes 5 mins) +before it takes effect. + +This can be used to build general purpose proxies to any kind of +backend that rclone supports. + ``` rclone serve sftp remote:path [flags] ``` ### Options ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2022") + --auth-proxy string A program to use to create the backend from the auth. --authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys") --dir-cache-time duration Time to cache directory entries for.
(default 5m0s) --dir-perms FileMode Directory permissions (default 0777) @@ -3914,12 +4161,12 @@ rclone serve sftp remote:path [flags] --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone serve webdav Serve remote:path over webdav. @@ -3963,6 +4210,14 @@ for a transfer. --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. +--baseurl controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used --baseurl "/rclone" then +rclone would serve from a URL starting with "/rclone/". This is +useful if you wish to proxy rclone serve. Rclone automatically +inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", +--baseurl "/rclone" and --baseurl "/rclone/" are all treated +identically. + #### Authentication By default this will serve files without needing a login. @@ -4130,6 +4385,72 @@ This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times. +### Auth Proxy + +If you supply the parameter `--auth-proxy /path/to/program` then +rclone will use that program to generate backends on the fly which +are then used to authenticate incoming requests. This uses a simple +JSON-based protocol with input on STDIN and output on STDOUT. + +There is an example program +[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) +in the rclone source code. + +The program's job is to take a `user` and `pass` on the input and turn +those into the config for a backend on STDOUT in JSON format.
This +config will have any default parameters for the backend added, but it +won't use configuration from environment variables or command line +options - it is the job of the proxy program to make a complete +config. + +The config generated must have this extra parameter +- `_root` - root to use for the backend + +And it may have this parameter +- `_obscure` - comma separated strings for parameters to obscure + +For example the program might take this on STDIN + +``` +{ + "user": "me", + "pass": "mypassword" +} +``` + +And return this on STDOUT + +``` +{ + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" +} +``` + +This would mean that an SFTP backend would be created on the fly for +the `user` and `pass` returned in the output to the host given. Note +that since `_obscure` is set to `pass`, rclone will obscure the `pass` +parameter before creating the backend (which is required for sftp +backends). + +The program can manipulate the supplied `user` in any way. For example, +to proxy to many different sftp backends, you could make the +`user` be `user@example.com` and then set the `host` to `example.com` +in the output and the user to `user`. For security you'd probably want +to restrict the `host` to a limited list. + +Note that an internal cache is keyed on `user` so only use that for +configuration, don't use `pass`. This also means that if a user's +password is changed the cache will need to expire (which takes 5 mins) +before it takes effect. + +This can be used to build general purpose proxies to any kind of +backend that rclone supports. + ``` rclone serve webdav remote:path [flags] @@ -4139,6 +4460,8 @@ rclone serve webdav remote:path [flags] ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --auth-proxy string A program to use to create the backend from the auth. + --baseurl string Prefix for URLs - leave blank for root.
--cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with --dir-cache-time duration Time to cache directory entries for. (default 5m0s) @@ -4171,12 +4494,12 @@ rclone serve webdav remote:path [flags] --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone settier Changes storage class/tier of objects in remote. @@ -4217,12 +4540,12 @@ rclone settier tier remote:path [flags] -h, --help help for settier ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone touch Create new file or change file modification time. @@ -4243,12 +4566,12 @@ rclone touch remote:path [flags] -t, --timestamp string Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05) ``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - ## rclone tree List the contents of the remote in a tree like fashion. @@ -4310,12 +4633,12 @@ rclone tree remote:path [flags] --version Sort files alphanumerically by version. 
``` +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + ### SEE ALSO * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. -###### Auto generated by spf13/cobra on 15-Jun-2019 - Copying single files -------------------- @@ -4536,6 +4859,8 @@ If running rclone from a script you might want to use today's date as the directory name passed to `--backup-dir` to store the old files, or you might want to pass `--suffix` with today's date. +See `--compare-dest` and `--copy-dest`. + ### --bind string ### Local address to bind to for outgoing connections. This can be an @@ -4655,6 +4980,18 @@ quicker than without the `--checksum` flag. When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally. +### --compare-dest=DIR ### + +When using `sync`, `copy` or `move` DIR is checked in addition to the +destination for files. If a file identical to the source is found that +file is NOT copied from source. This is useful to copy just files that +have changed since the last backup. + +You must use the same remote as the destination of the sync. The +compare directory must not overlap the destination directory. + +See `--copy-dest` and `--backup-dir`. + ### --config=CONFIG_FILE ### Specify the location of the rclone config file. @@ -4683,6 +5020,19 @@ The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is `1m` by default. +### --copy-dest=DIR ### + +When using `sync`, `copy` or `move` DIR is checked in addition to the +destination for files. If a file identical to the source is found that +file is server side copied from DIR to the destination. This is useful +for incremental backup. + +The remote in use must support server side copy and you must +use the same remote as the destination of the sync. The compare +directory must not overlap the destination directory. 
+ +See `--compare-dest` and `--backup-dir`. + ### --dedupe-mode MODE ### Mode to run dedupe command in. One of `interactive`, `skip`, `first`, `newest`, `oldest`, `rename`. The default is `interactive`. See the dedupe command for more information as to what these options mean. @@ -4816,6 +5166,11 @@ warnings and significant events. `ERROR` is equivalent to `-q`. It only outputs error messages. +### --use-json-log ### + +This switches rclone's log format to JSON. The fields of the JSON log +are level, msg, source and time. + ### --low-level-retries NUMBER ### This controls the number of low level retries rclone does. @@ -4918,6 +5273,10 @@ above. **NB** that this **only** works for a local destination but will work with any source. +**NB** that multi thread copies are disabled for local to local copies +as they are faster without them, unless `--multi-thread-streams` is set +explicitly. + ### --multi-thread-streams=N ### When using multi thread downloads (see above `--multi-thread-cutoff`) @@ -5090,11 +5449,23 @@ The default is `bytes`. ### --suffix=SUFFIX ### -This is for use with `--backup-dir` only. If this isn't set then -`--backup-dir` will move files with their original name. If it is set -then the files will have SUFFIX added on to them. +When using `sync`, `copy` or `move` any files which would have been +overwritten or deleted will have the suffix added to them. If there +is a file with the same path (after the suffix has been added), then +it will be overwritten. -See `--backup-dir` for more info. +The remote in use must support server side move or copy and you must +use the same remote as the destination of the sync. + +This can be used on its own, leaving the suffixed files in the +destination directory, or combined with `--backup-dir`. See +`--backup-dir` for more info. + +For example + + rclone sync /path/to/local remote:current --suffix .bak + +will sync `/path/to/local` to `remote:current`, but any files +which would have been updated or deleted will have `.bak` added.
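As an illustration of the renaming rule (a plain-Python sketch, not rclone's actual code), the plain `--suffix` behaviour simply appends the suffix after the complete file name, extension included:

```python
# Sketch (not rclone's implementation): plain --suffix behaviour.
# The suffix is appended after the whole file name, extension included.
def add_suffix(path: str, suffix: str) -> str:
    return path + suffix

print(add_suffix("report.txt", ".bak"))      # report.txt.bak
print(add_suffix("archive.tar.gz", ".bak"))  # archive.tar.gz.bak
```

Contrast this with `--suffix-keep-extension` below, which keeps the original extension at the end of the name.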
### --suffix-keep-extension ### @@ -5257,15 +5628,16 @@ If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different. -On remotes which don't support mod time directly the time checked will -be the uploaded time. This means that if uploading to one of these -remotes, rclone will skip any files which exist on the destination and -have an uploaded time that is newer than the modification time of the -source file. +On remotes which don't support mod time directly (or when using +`--use-server-mod-time`) the time checked will be the uploaded time. +This means that if uploading to one of these remotes, rclone will skip +any files which exist on the destination and have an uploaded time that +is newer than the modification time of the source file. This can be useful when transferring to a remote which doesn't support -mod times directly as it is more accurate than a `--size-only` check -and faster than using `--checksum`. +mod times directly (or when using `--use-server-mod-time` to avoid extra +API calls) as it is more accurate than a `--size-only` check and faster +than using `--checksum`. ### --use-mmap ### @@ -5290,10 +5662,14 @@ additional metadata on the object. By default it will make an API call to retrieve the metadata when the modtime is needed by an operation. Use this flag to disable the extra API call and rely instead on the server's -modified time. In cases such as a local to remote sync, knowing the local file -is newer than the time it was last uploaded to the remote is sufficient. In -those cases, this flag can speed up the process and reduce the number of API -calls necessary. +modified time. In cases such as a local to remote sync using `--update`, +knowing the local file is newer than the time it was last uploaded to the +remote is sufficient. 
In those cases, this flag can speed up the process and +reduce the number of API calls necessary. + +Using this flag on a sync operation without also using `--update` would cause +all files modified at any time other than the last upload time to be uploaded +again, which is probably not what you want. ### -v, -vv, --verbose ### @@ -6071,7 +6447,7 @@ You could then use it like this: This will transfer these files only (if they exist) /home/me/pics/file1.jpg → remote:pics/file1.jpg - /home/me/pics/subdir/file2.jpg → remote:pics/subdirfile1.jpg + /home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths: @@ -6097,7 +6473,7 @@ The 3 files will arrive in `remote:backup` with the paths as in the /home/user1/important → remote:backup/user1/important /home/user1/dir/file → remote:backup/user1/dir/file - /home/user2/stuff → remote:backup/stuff + /home/user2/stuff → remote:backup/user2/stuff You could of course choose `/` as the root too in which case your `files-from.txt` might look like this. @@ -6112,9 +6488,9 @@ And you would transfer it like this In this case there will be an extra `home` directory on the remote: - /home/user1/important → remote:home/backup/user1/important - /home/user1/dir/file → remote:home/backup/user1/dir/file - /home/user2/stuff → remote:home/backup/stuff + /home/user1/important → remote:backup/home/user1/important + /home/user1/dir/file → remote:backup/home/user1/dir/file + /home/user2/stuff → remote:backup/home/user2/stuff ### `--min-size` - Don't transfer any file smaller than this ### @@ -6231,6 +6607,105 @@ You can exclude `dir3` from sync by running the following command: Currently only one filename is supported, i.e. `--exclude-if-present` should not be used multiple times. +# GUI (Experimental) + +Rclone can serve a web based GUI (graphical user interface). 
This is +somewhat experimental at the moment so things may be subject to +change. + +Run this command in a terminal and rclone will download and then +display the GUI in a web browser. + +``` +rclone rcd --rc-web-gui +``` + +This will produce logs like the following, and rclone needs to continue running to serve the GUI: + +``` +2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip +2019/08/25 11:40:14 NOTICE: Downloading webgui binary. Please wait. [Size: 3813937, Path : /home/USER/.cache/rclone/webgui/v0.0.6.zip] +2019/08/25 11:40:16 NOTICE: Unzipping +2019/08/25 11:40:16 NOTICE: Serving remote control on http://127.0.0.1:5572/ +``` + +This assumes you are running rclone locally on your machine. It is +possible to separate rclone and the GUI - see below for details. + +If you wish to update to the latest API version then you can add +`--rc-web-gui-update` to the command line. + +## Using the GUI + +Once the GUI opens, you will be looking at the dashboard, which gives an overall overview. + +On the left hand side you will see a series of view buttons you can click on: + +- Dashboard - main overview +- Configs - examine and create new configurations +- Explorer - view, download and upload files to the cloud storage systems +- Backend - view or alter the backend config +- Log out + +(More docs and walkthrough video to come!) + +## How it works + +When you run `rclone rcd --rc-web-gui` this is what happens: + +- Rclone starts but only runs the remote control API ("rc"). +- The API is bound to localhost with an auto generated username and password. +- If the API bundle is missing then rclone will download it. +- rclone will start serving the files from the API bundle over the same port as the API. +- rclone will open the browser with a `login_token` so it can log straight in.
+ +## Advanced use + +The `rclone rcd` command may use any of the [flags documented on the rc page](https://rclone.org/rc/#supported-parameters). + +The flag `--rc-web-gui` is shorthand for + +- Download the web GUI if necessary +- Check we are using some authentication +- `--rc-user gui` +- `--rc-pass ` +- `--rc-serve` + +These flags can be overridden as desired. + +See also the [rclone rcd documentation](https://rclone.org/commands/rclone_rcd/). + +### Example: Running a public GUI + +For example, the GUI could be served on a public port over SSL using an htpasswd file for authentication, with the following flags: + +- `--rc-web-gui` +- `--rc-addr :443` +- `--rc-htpasswd /path/to/htpasswd` +- `--rc-cert /path/to/ssl.crt` +- `--rc-key /path/to/ssl.key` + +### Example: Running a GUI behind a proxy + +If you want to run the GUI behind a proxy at `/rclone` you could use these flags: + +- `--rc-web-gui` +- `--rc-baseurl rclone` +- `--rc-htpasswd /path/to/htpasswd` + +Or, instead of an htpasswd file, if you just want a single user and password: + +- `--rc-user me` +- `--rc-pass mypassword` + +## Project + +The GUI is being developed in the [rclone/rclone-webui-react repository](https://github.com/rclone/rclone-webui-react). + +Bug reports and contributions are very welcome :-) + +If you have questions then please ask them on the [rclone forum](https://forum.rclone.org/). + # Remote controlling rclone # If rclone is run with the `--rc` flag then it starts an http server @@ -6312,6 +6787,32 @@ style. Default Off. +### --rc-web-gui + +Set this flag to serve the default web gui on the same port as rclone. + +Default Off. + +### --rc-allow-origin + +Set the allowed Access-Control-Allow-Origin for rc requests. + +Can be used with --rc-web-gui if rclone is running on a different IP than the web GUI. + +Default is IP address on which rc is running. + +### --rc-web-fetch-url + +Set the URL to fetch the rclone-web-gui files from.
+ +Default https://api.github.com/repos/rclone/rclone-webui-react/releases/latest. + +### --rc-web-gui-update + +Set this flag to download / force an update of rclone-webui-react from the rc-web-fetch-url. + +Default Off. + ### --rc-job-expire-duration=DURATION Expire finished async jobs older than DURATION (default 60s). @@ -6377,6 +6878,9 @@ The rc interface supports some special parameters which apply to ### Running asynchronous jobs with _async = true +Each rc call is classified as a job and is assigned its own id. By default +jobs are executed immediately as they are created, i.e. synchronously. + If `_async` has a true value when supplied to an rc call then it will return immediately with a job id and the task will be run in the background. The `job/status` call can be used to get information of @@ -6437,9 +6941,28 @@ $ rclone rc job/list } ``` +### Assigning operations to groups with _group = + +Each rc call has its own stats group for tracking its metrics. By default +the group name is composed of the prefix `job/` and the id of the +job, e.g. `job/1`. + +If `_group` has a value then stats for that request will be grouped under that +value. This allows the caller to group stats under their own name. + +Stats for a specific group can be accessed by passing `group` to `core/stats`: + +``` +$ rclone rc --json '{ "group": "job/1" }' core/stats +{ + "speed": 12345 + ... +} +``` + ## Supported commands -### cache/expire: Purge a remote from cache +### cache/expire: Purge a remote from cache {#cache/expire} Purge a remote from the cache backend. Supports either a directory or a file. Params: @@ -6451,7 +6974,7 @@ Eg rclone rc cache/expire remote=path/to/sub/folder/ rclone rc cache/expire remote=/ withData=true -### cache/fetch: Fetch file chunks +### cache/fetch: Fetch file chunks {#cache/fetch} Ensure the specified file chunks are cached on disk.
@@ -6478,11 +7001,11 @@ specify files to fetch, eg File names will automatically be encrypted when a crypt remote is used on top of the cache. -### cache/stats: Get cache stats +### cache/stats: Get cache stats {#cache/stats} Show statistics for the cache remote. -### config/create: create the config for a remote. +### config/create: create the config for a remote. {#config/create} This takes the following parameters @@ -6494,7 +7017,7 @@ See the [config create command](https://rclone.org/commands/rclone_config_create Authentication is required for this call. -### config/delete: Delete a remote in the config file. +### config/delete: Delete a remote in the config file. {#config/delete} Parameters: - name - name of remote to delete @@ -6503,7 +7026,7 @@ See the [config delete command](https://rclone.org/commands/rclone_config_delete Authentication is required for this call. -### config/dump: Dumps the config file. +### config/dump: Dumps the config file. {#config/dump} Returns a JSON object: - key: value @@ -6514,7 +7037,7 @@ See the [config dump command](https://rclone.org/commands/rclone_config_dump/) c Authentication is required for this call. -### config/get: Get a remote in the config file. +### config/get: Get a remote in the config file. {#config/get} Parameters: - name - name of remote to get @@ -6523,7 +7046,7 @@ See the [config dump command](https://rclone.org/commands/rclone_config_dump/) c Authentication is required for this call. -### config/listremotes: Lists the remotes in the config file. +### config/listremotes: Lists the remotes in the config file. {#config/listremotes} Returns - remotes - array of remote names @@ -6532,7 +7055,7 @@ See the [listremotes command](https://rclone.org/commands/rclone_listremotes/) c Authentication is required for this call. -### config/password: password the config for a remote. +### config/password: password the config for a remote.
{#config/password} This takes the following parameters @@ -6543,7 +7066,7 @@ See the [config password command](https://rclone.org/commands/rclone_config_pass Authentication is required for this call. -### config/providers: Shows how providers are configured in the config file. +### config/providers: Shows how providers are configured in the config file. {#config/providers} Returns a JSON object: - providers - array of objects @@ -6552,7 +7075,7 @@ See the [config providers command](https://rclone.org/commands/rclone_config_pro Authentication is required for this call. -### config/update: update the config for a remote. +### config/update: update the config for a remote. {#config/update} This takes the following parameters @@ -6563,25 +7086,60 @@ See the [config update command](https://rclone.org/commands/rclone_config_update Authentication is required for this call. -### core/bwlimit: Set the bandwidth limit. +### core/bwlimit: Set the bandwidth limit. {#core/bwlimit} This sets the bandwidth limit to that passed in. Eg - rclone rc core/bwlimit rate=1M rclone rc core/bwlimit rate=off + { + "bytesPerSecond": -1, + "rate": "off" + } + rclone rc core/bwlimit rate=1M + { + "bytesPerSecond": 1048576, + "rate": "1M" + } + + +If the rate parameter is not supplied then the bandwidth is queried + + rclone rc core/bwlimit + { + "bytesPerSecond": 1048576, + "rate": "1M" + } The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified. -### core/gc: Runs a garbage collection. +In either case "rate" is returned as a human readable string, and +"bytesPerSecond" is returned as a number. + +### core/gc: Runs a garbage collection. {#core/gc} This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems. -### core/memstats: Returns the memory statistics +### core/group-list: Returns list of stats.
{#core/group-list} + +This returns a list of stats groups currently in memory. + +Returns the following values: +``` +{ + "groups": an array of group names: + [ + "group1", + "group2", + ... + ] +} +``` + +### core/memstats: Returns the memory statistics {#core/memstats} This returns the memory statistics of the running program. What the values mean are explained in the go docs: https://golang.org/pkg/runtime/#MemStats @@ -6593,7 +7151,7 @@ The most interesting values for most people are: * Sys: this is the total amount of memory requested from the OS * It is virtual memory so may include unused memory -### core/obscure: Obscures a string passed in. +### core/obscure: Obscures a string passed in. {#core/obscure} Pass a clear string and rclone will obscure it for the config file: - clear - string @@ -6601,17 +7159,23 @@ Pass a clear string and rclone will obscure it for the config file: Returns - obscured - string -### core/pid: Return PID of current process +### core/pid: Return PID of current process {#core/pid} This returns PID of current process. Useful for stopping rclone process. -### core/stats: Returns stats about current transfers. +### core/stats: Returns stats about current transfers. {#core/stats} -This returns all available stats +This returns all available stats: rclone rc core/stats +If group is not provided then the summed up stats for all groups will be +returned. + +Parameters +- group - name of the stats group (string) + Returns the following values: ``` { @@ -6645,7 +7209,44 @@ Returns the following values: Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined. -### core/version: Shows the current version of rclone and the go runtime. +### core/stats-reset: Reset stats. {#core/stats-reset} + +This clears counters and errors for all stats, or for a specific stats group if group +is provided.
+ +Parameters +- group - name of the stats group (string) + +### core/transferred: Returns stats about completed transfers. {#core/transferred} + +This returns stats about completed transfers: + + rclone rc core/transferred + +If group is not provided then completed transfers for all groups will be +returned. + +Parameters +- group - name of the stats group (string) + +Returns the following values: +``` +{ + "transferred": an array of completed transfers (including failed ones): + [ + { + "name": name of the file, + "size": size of the file in bytes, + "bytes": total transferred bytes for this file, + "checked": if the transfer is only checked (skipped, deleted), + "timestamp": integer representing millisecond unix epoch, + "error": string description of the error (empty if successful), + "jobid": id of the job that this transfer belongs to + } + ] +} +``` + +### core/version: Shows the current version of rclone and the go runtime. {#core/version} This shows the current version of go and the go runtime - version - rclone version, eg "v1.44" @@ -6656,14 +7257,14 @@ This shows the current version of go and the go runtime - arch - cpu architecture in use according to Go - goVersion - version of Go runtime in use -### job/list: Lists the IDs of the running jobs +### job/list: Lists the IDs of the running jobs {#job/list} Parameters - None Results - jobids - array of integer job ids -### job/status: Reads the status of the job ID +### job/status: Reads the status of the job ID {#job/status} Parameters - jobid - id of the job (integer) @@ -6678,8 +7279,14 @@ Results - startTime - time the job started (eg "2018-10-26T18:50:20.528336039+01:00") - success - boolean - true for success false otherwise - output - output of the job as would have been returned if called synchronously +- progress - output of the progress related to the underlying job -### operations/about: Return the space used on the remote +### job/stop: Stop the running job {#job/stop} + +Parameters +- jobid - id
of the job (integer) + +### operations/about: Return the space used on the remote {#operations/about} This takes the following parameters @@ -6691,7 +7298,7 @@ See the [about command](https://rclone.org/commands/rclone_size/) command for mo Authentication is required for this call. -### operations/cleanup: Remove trashed files in the remote or path +### operations/cleanup: Remove trashed files in the remote or path {#operations/cleanup} This takes the following parameters @@ -6701,7 +7308,7 @@ See the [cleanup command](https://rclone.org/commands/rclone_cleanup/) command f Authentication is required for this call. -### operations/copyfile: Copy a file from source remote to destination remote +### operations/copyfile: Copy a file from source remote to destination remote {#operations/copyfile} This takes the following parameters @@ -6712,7 +7319,7 @@ This takes the following parameters Authentication is required for this call. -### operations/copyurl: Copy the URL to the object +### operations/copyurl: Copy the URL to the object {#operations/copyurl} This takes the following parameters @@ -6724,7 +7331,7 @@ See the [copyurl command](https://rclone.org/commands/rclone_copyurl/) command f Authentication is required for this call. -### operations/delete: Remove files in the path +### operations/delete: Remove files in the path {#operations/delete} This takes the following parameters @@ -6734,7 +7341,7 @@ See the [delete command](https://rclone.org/commands/rclone_delete/) command for Authentication is required for this call. -### operations/deletefile: Remove the single file pointed to +### operations/deletefile: Remove the single file pointed to {#operations/deletefile} This takes the following parameters @@ -6745,7 +7352,7 @@ See the [deletefile command](https://rclone.org/commands/rclone_deletefile/) com Authentication is required for this call. 
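The `operations/*` calls above can also be made as plain HTTP POSTs of a JSON body to the rc server. As a sketch using only the Python standard library (the URL, user and password below are illustrative assumptions - use whatever you started rclone with):

```python
# Sketch: building an rc request such as operations/deletefile.
# Assumes rclone is serving rc, e.g.: rclone rcd --rc-user me --rc-pass mypassword
# The host/port and credentials here are assumptions, not fixed values.
import base64
import json
import urllib.request

def rc_request(method: str, params: dict,
               url: str = "http://127.0.0.1:5572",
               user: str = "me", password: str = "mypassword") -> urllib.request.Request:
    # rc methods are invoked by POSTing a JSON object of parameters
    req = urllib.request.Request(f"{url}/{method}",
                                 data=json.dumps(params).encode("utf-8"),
                                 method="POST")
    req.add_header("Content-Type", "application/json")
    # basic auth matching --rc-user / --rc-pass
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

req = rc_request("operations/deletefile", {"fs": "remote:", "remote": "path/to/file"})
# Send with: json.load(urllib.request.urlopen(req))
```

The `rclone rc` command shown throughout this section does the equivalent of this from the command line.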
-### operations/fsinfo: Return information about the remote +### operations/fsinfo: Return information about the remote {#operations/fsinfo} This takes the following parameters @@ -6802,7 +7409,7 @@ This command does not have a command line equivalent so use this instead: rclone rc --loopback operations/fsinfo fs=remote: -### operations/list: List the given remote and path in JSON format +### operations/list: List the given remote and path in JSON format {#operations/list} This takes the following parameters @@ -6824,7 +7431,7 @@ See the [lsjson command](https://rclone.org/commands/rclone_lsjson/) for more in Authentication is required for this call. -### operations/mkdir: Make a destination directory or container +### operations/mkdir: Make a destination directory or container {#operations/mkdir} This takes the following parameters @@ -6835,7 +7442,7 @@ See the [mkdir command](https://rclone.org/commands/rclone_mkdir/) command for m Authentication is required for this call. -### operations/movefile: Move a file from source remote to destination remote +### operations/movefile: Move a file from source remote to destination remote {#operations/movefile} This takes the following parameters @@ -6846,7 +7453,7 @@ This takes the following parameters Authentication is required for this call. -### operations/publiclink: Create or retrieve a public link to the given file or folder. +### operations/publiclink: Create or retrieve a public link to the given file or folder. {#operations/publiclink} This takes the following parameters @@ -6861,7 +7468,7 @@ See the [link command](https://rclone.org/commands/rclone_link/) command for mor Authentication is required for this call. 
-### operations/purge: Remove a directory or container and all of its contents +### operations/purge: Remove a directory or container and all of its contents {#operations/purge} This takes the following parameters @@ -6872,7 +7479,7 @@ See the [purge command](https://rclone.org/commands/rclone_purge/) command for m Authentication is required for this call. -### operations/rmdir: Remove an empty directory or container +### operations/rmdir: Remove an empty directory or container {#operations/rmdir} This takes the following parameters @@ -6883,7 +7490,7 @@ See the [rmdir command](https://rclone.org/commands/rclone_rmdir/) command for m Authentication is required for this call. -### operations/rmdirs: Remove all the empty directories in the path +### operations/rmdirs: Remove all the empty directories in the path {#operations/rmdirs} This takes the following parameters @@ -6895,7 +7502,7 @@ See the [rmdirs command](https://rclone.org/commands/rclone_rmdirs/) command for Authentication is required for this call. -### operations/size: Count the number of bytes and files in remote +### operations/size: Count the number of bytes and files in remote {#operations/size} This takes the following parameters @@ -6910,12 +7517,12 @@ See the [size command](https://rclone.org/commands/rclone_size/) command for mor Authentication is required for this call. -### options/blocks: List all the option blocks +### options/blocks: List all the option blocks {#options/blocks} Returns - options - a list of the options block names -### options/get: Get all the options +### options/get: Get all the options {#options/get} Returns an object where keys are option block names and values are an object with the current option values in. @@ -6923,7 +7530,7 @@ object with the current option values in. This shows the internal names of the option within rclone which should map to the external options very easily with a few exceptions. 
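A caller might pick a single value out of an `options/get` style response like this. Note the response fragment below is made up for illustration, not a real capture - real responses use rclone's internal option names (e.g. `LogLevel` for `--log-level`):

```python
# Sketch: navigating an options/get style response.
# The sample fragment is hypothetical; real responses are keyed by
# option block name ("main", "vfs", ...) then internal option name.
import json

response = json.loads("""
{
  "main": {"LogLevel": 6, "Transfers": 4},
  "vfs": {"CacheMode": 0}
}
""")

log_level = response["main"]["LogLevel"]
print(log_level)
```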
-### options/set: Set an option +### options/set: Set an option {#options/set} Parameters @@ -6950,23 +7557,23 @@ And this sets NOTICE level logs (normal without -v) rclone rc options/set --json '{"main": {"LogLevel": 6}}' -### rc/error: This returns an error +### rc/error: This returns an error {#rc/error} This returns an error with the input as part of its error string. Useful for testing error handling. -### rc/list: List all the registered remote control commands +### rc/list: List all the registered remote control commands {#rc/list} This lists all the registered remote control commands as a JSON map in the commands response. -### rc/noop: Echo the input to the output parameters +### rc/noop: Echo the input to the output parameters {#rc/noop} This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly. -### rc/noopauth: Echo the input to the output parameters requiring auth +### rc/noopauth: Echo the input to the output parameters requiring auth {#rc/noopauth} This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to @@ -6974,7 +7581,7 @@ check that parameter passing is working properly. Authentication is required for this call. -### sync/copy: copy a directory from source remote to destination remote +### sync/copy: copy a directory from source remote to destination remote {#sync/copy} This takes the following parameters @@ -6986,7 +7593,7 @@ See the [copy command](https://rclone.org/commands/rclone_copy/) command for mor Authentication is required for this call. 
-### sync/move: move a directory from source remote to destination remote +### sync/move: move a directory from source remote to destination remote {#sync/move} This takes the following parameters @@ -6999,7 +7606,7 @@ See the [move command](https://rclone.org/commands/rclone_move/) command for mor Authentication is required for this call. -### sync/sync: sync a directory from source remote to destination remote +### sync/sync: sync a directory from source remote to destination remote {#sync/sync} This takes the following parameters @@ -7011,7 +7618,7 @@ See the [sync command](https://rclone.org/commands/rclone_sync/) command for mor Authentication is required for this call. -### vfs/forget: Forget files or directories in the directory cache. +### vfs/forget: Forget files or directories in the directory cache. {#vfs/forget} This forgets the paths in the directory cache causing them to be re-read from the remote when needed. @@ -7027,7 +7634,7 @@ starting with dir will forget that dir, eg rclone rc vfs/forget file=hello file2=goodbye dir=home/junk -### vfs/poll-interval: Get the status or update the value of the poll-interval option. +### vfs/poll-interval: Get the status or update the value of the poll-interval option. {#vfs/poll-interval} Without any parameter given this returns the current status of the poll-interval setting. @@ -7049,7 +7656,7 @@ If poll-interval is updated or disabled temporarily, some changes might not get picked up by the polling function, depending on the used remote. -### vfs/refresh: Refresh the directory cache. +### vfs/refresh: Refresh the directory cache. {#vfs/refresh} This reads the directories for the specified paths and freshens the directory cache. @@ -7293,6 +7900,7 @@ Here is an overview of the major features of each cloud storage system. 
| Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type | | ---------------------------- |:-----------:|:-------:|:----------------:|:---------------:|:---------:| +| 1Fichier | Whirlpool | No | No | Yes | R | | Amazon Drive | MD5 | No | Yes | No | R | | Amazon S3 | MD5 | Yes | No | No | R/W | | Backblaze B2 | SHA1 | Yes | No | No | R/W | @@ -7301,6 +7909,7 @@ Here is an overview of the major features of each cloud storage system. | FTP | - | No | No | No | - | | Google Cloud Storage | MD5 | Yes | No | No | R/W | | Google Drive | MD5 | Yes | No | Yes | R/W | +| Google Photos | - | No | No | Yes | R | | HTTP | - | No | No | No | R | | Hubic | MD5 | Yes | No | No | R/W | | Jottacloud | MD5 | Yes | Yes | No | R/W | @@ -7311,6 +7920,8 @@ Here is an overview of the major features of each cloud storage system. | OpenDrive | MD5 | Yes | Yes | No | - | | Openstack Swift | MD5 | Yes | No | No | R/W | | pCloud | MD5, SHA1 | Yes | No | No | W | +| premiumize.me | - | No | Yes | No | R | +| put.io | CRC-32 | Yes | No | Yes | R | | QingStor | MD5 | No | No | No | R/W | | SFTP | MD5, SHA1 ‡ | Yes | Depends | No | - | | WebDAV | MD5, SHA1 ††| Yes ††† | Depends | No | - | @@ -7405,30 +8016,34 @@ All the remotes support a basic set of features, but there are some optional features supported by some remotes used to make some operations more efficient. 
-| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | LinkSharing | About | -| ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------:|:-----:| -| Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | -| Amazon S3 | No | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | -| Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | -| Box | Yes | Yes | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | Yes | Yes | No | -| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | Yes | Yes | Yes | -| FTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | -| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | -| Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | -| HTTP | No | No | No | No | No | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | -| Hubic | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | -| Jottacloud | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | -| Mega | Yes | No | Yes | Yes | Yes | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | -| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | -| Microsoft OneDrive | Yes | Yes | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | No | Yes | Yes | -| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | -| Openstack Swift | Yes † | Yes | No | No | No | Yes | Yes | No 
[#2178](https://github.com/rclone/rclone/issues/2178) | Yes | -| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | -| QingStor | No | Yes | No | No | No | Yes | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | -| SFTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | -| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | -| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | -| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | No | Yes | +| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | LinkSharing | About | EmptyDir | +| ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------:|:-----:| :------: | +| 1Fichier | No | No | No | No | No | No | No | No | No | Yes | +| Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | Yes | +| Amazon S3 | No | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | No | +| Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | +| Box | Yes | Yes | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | Yes | Yes | No | Yes | +| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | Yes | Yes | Yes | Yes | +| FTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | Yes | +| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | No | +| Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | +| Google Photos | No | No | No | No | No | No | No | No | No | No | +| HTTP | 
No | No | No | No | No | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | Yes | +| Hubic | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | No | +| Jottacloud | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes | +| Mega | Yes | No | Yes | Yes | Yes | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes | +| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | No | +| Microsoft OneDrive | Yes | Yes | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | No | Yes | Yes | Yes | +| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | +| Openstack Swift | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | No | +| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes | +| premiumize.me | Yes | No | Yes | Yes | No | No | No | Yes | Yes | Yes | +| put.io | Yes | No | Yes | Yes | Yes | No | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes | +| QingStor | No | Yes | No | No | No | Yes | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | No | +| SFTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes | +| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No [#2178](https://github.com/rclone/rclone/issues/2178) | Yes | Yes | +| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes | +| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | No | Yes | Yes | ### Purge ### @@ -7502,6 +8117,489 @@ This is also used to return the space used, available for `rclone mount`. If the server can't do `About` then `rclone about` will return an error. +### EmptyDir ### + +The remote supports empty directories. 
See [Limitations](/bugs/#limitations) + for details. Most Object/Bucket based remotes do not support this. + +# Global Flags + +This describes the global flags available to every rclone command +split into two groups, non backend and backend flags. + +## Non Backend Flags + +These flags are available for every command. + +``` + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. + --backup-dir string Make backups into hierarchy based in DIR. + --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. + --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M) + --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. + --ca-cert string CA certificate used to verify servers + --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") + --checkers int Number of checkers to run in parallel. (default 8) + -c, --checksum Skip based on checksum (if available) & size, not mod-time & size + --client-cert string Client SSL certificate (PEM) for mutual TLS auth + --client-key string Client SSL private key (PEM) for mutual TLS auth + --compare-dest string Use DIR to server side copy files from. + --config string Config file. (default "$HOME/.config/rclone/rclone.conf") + --contimeout duration Connect timeout (default 1m0s) + --copy-dest string Compare dest to DIR also. + --cpuprofile string Write cpu profile to file + --delete-after When synchronizing, delete files on destination after transferring (default) + --delete-before When synchronizing, delete files on destination before transferring + --delete-during When synchronizing, delete files during transfer + --delete-excluded Delete files on dest excluded from sync + --disable string Disable a comma separated list of features. Use help to see a list.
+ -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. + --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ignore-case Ignore case in filters (case insensitive) + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums. + --ignore-errors Delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping; use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog.
(default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-stats-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250M) + --multi-thread-streams int Max number of streams to use for multi-thread downloads. (default 4) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -P, --progress Show progress during transfer. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-allow-origin string Set the allowed origin for CORS. + --rc-baseurl string Prefix for URLs - leave blank for root. + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. 
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-job-expire-duration duration expire finished async jobs older than this value (default 1m0s) + --rc-job-expire-interval duration interval to check for expired async jobs (default 10s) + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --rc-web-fetch-url string URL to fetch the releases for webgui. (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest") + --rc-web-gui Launch WebGUI on localhost + --rc-web-gui-update Update / Force update to latest version of web gui + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-one-line-date Enables --stats-one-line and add current date/time prefix. + --stats-one-line-date-format string Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). 
See https://golang.org/pkg/time/#Time.Format + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix to add to changed files. + --suffix-keep-extension Preserve the extension when using --suffix. + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-json-log Use json log format. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.0") + -v, --verbose count Print lots more stuff (repeat for more) +``` + +## Backend Flags + +These flags are available for every command. They control the backends +and may be set in the config file. + +``` + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. 
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use SAS URL or Emulator) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w) + --b2-download-url string Custom endpoint for downloads. + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. 
(default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. + --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + -L, --copy-links Follow symlinks and copy the pointed to item. + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. 
(default "standard") + --crypt-password string Password or pass phrase for encryption. + --crypt-password2 string Password or pass phrase for salt. Optional but recommended. + --crypt-remote string Remote to encrypt/decrypt. + --crypt-show-mapping For all files listed show how the names encrypt. + --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. + --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. + --drive-alternate-export Use alternate export URLs for Google documents export. + --drive-auth-owner-only Only consider files owned by the authenticated user. + --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M) + --drive-client-id string Google Application Client Id + --drive-client-secret string Google Application Client Secret + --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") + --drive-formats string Deprecated: see export_formats + --drive-impersonate string Impersonate this user when using a service account. + --drive-import-formats string Comma separated list of preferred formats for uploading Google docs. + --drive-keep-revision-forever Keep new head revision of each file forever. + --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) + --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100) + --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) + --drive-root-folder-id string ID of the root folder + --drive-scope string Scope that rclone should use when requesting access from drive. + --drive-server-side-across-configs Allow server side operations (eg copy) to work across different drive configs.
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-size-as-quota Show storage quota usage for file size. + --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only. + --drive-skip-gdocs Skip Google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date. + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl + --fichier-shared-folder string If you want to download a shared folder, add this parameter + --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited + --ftp-host string FTP host to connect to + --ftp-no-check-certificate Do not verify the TLS certificate of the server + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-tls Use FTP over TLS (Implicit) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
+ --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --gphotos-client-id string Google Application Client Id + --gphotos-client-secret string Google Application Client Secret + --gphotos-read-only Set to make the Google Photos backend read only. + --gphotos-read-size Set to read the size of media items. + --http-headers CommaSepList Set HTTP headers for all transactions + --http-no-slash Set this if the site doesn't end directories with / + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. + --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) + --koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net") + --koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
+ --koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) + --koofr-setmtime Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. (default true) + --koofr-user string Your Koofr user name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-case-insensitive Force the filesystem to report itself as case insensitive + --local-case-sensitive Force the filesystem to report itself as case sensitive. + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. + --mega-user string User name + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. 
(default 3) + --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint. + --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-md5sum-command string The command used to read md5 hashes. Leave blank for autodetect. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-sha1sum-command string The command used to read sha1 hashes. Leave blank for autodetect. + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. Those algorithms are insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --skip-links Don't warn about skipped symlinks. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). 
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. + --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --union-remotes string List of space separated remotes. + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-bearer-token-command string Command to run to get a bearer token + --webdav-pass string Password. 
+      --webdav-url string                            URL of http host to connect to
+      --webdav-user string                           User name
+      --webdav-vendor string                         Name of the Webdav site/service/software you are using
+      --yandex-client-id string                      Yandex Client Id
+      --yandex-client-secret string                  Yandex Client Secret
+      --yandex-unlink                                Remove existing public link to file/folder with link command rather than creating.
+```
+
+1Fichier
+-----------------------------------------
+
+This is a backend for the [1fichier](https://1fichier.com) cloud
+storage service. Note that a Premium subscription is required to use
+the API.
+
+Paths are specified as `remote:path`
+
+Paths may be as deep as required, eg `remote:directory/subdirectory`.
+
+The initial setup for 1Fichier involves getting the API key from the
+website, which you need to do in your browser.
+
+Here is an example of how to make a remote called `remote`. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / 1Fichier
+   \ "fichier"
+[snip]
+Storage> fichier
+** See help for fichier backend at: https://rclone.org/fichier/ **
+
+Your API Key, get it from https://1fichier.com/console/params.pl
+Enter a string value. Press Enter for the default ("").
+api_key> example_key
+
+Edit advanced config?
(y/n) +y) Yes +n) No +y/n> +Remote config +-------------------- +[remote] +type = fichier +api_key = example_key +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +Once configured you can then use `rclone` like this, + +List directories in top level of your 1Fichier account + + rclone lsd remote: + +List all the files in your 1Fichier account + + rclone ls remote: + +To copy a local directory to a 1Fichier directory called backup + + rclone copy /home/source remote:backup + +### Modified time and hashes ### + +1Fichier does not support modification times. It supports the Whirlpool hash algorithm. + +### Duplicated files ### + +1Fichier can have two files with exactly the same name and path (unlike a +normal file system). + +Duplicated files cause problems with the syncing and you will see +messages in the log about duplicates. + +### Forbidden characters ### + +1Fichier does not support the characters ``\ < > " ' ` $`` and spaces at the beginning of folder names. +`rclone` automatically escapes these to a unicode equivalent. The exception is `/`, +which cannot be escaped and will therefore lead to errors. + + +### Standard Options + +Here are the standard options specific to fichier (1Fichier). + +#### --fichier-api-key + +Your API Key, get it from https://1fichier.com/console/params.pl + +- Config: api_key +- Env Var: RCLONE_FICHIER_API_KEY +- Type: string +- Default: "" + +### Advanced Options + +Here are the advanced options specific to fichier (1Fichier). + +#### --fichier-shared-folder + +If you want to download a shared folder, add this parameter + +- Config: shared_folder +- Env Var: RCLONE_FICHIER_SHARED_FOLDER +- Type: string +- Default: "" + + + Alias ----------------------------------------- @@ -7539,51 +8637,11 @@ n/s/q> n name> remote Type of storage to configure. 
Choose a number from below, or type in your own value - 1 / Alias for an existing remote +[snip] +XX / Alias for an existing remote \ "alias" - 2 / Amazon Drive - \ "amazon cloud drive" - 3 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 4 / Backblaze B2 - \ "b2" - 5 / Box - \ "box" - 6 / Cache a remote - \ "cache" - 7 / Dropbox - \ "dropbox" - 8 / Encrypt/Decrypt a remote - \ "crypt" - 9 / FTP Connection - \ "ftp" -10 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" -11 / Google Drive - \ "drive" -12 / Hubic - \ "hubic" -13 / Local Disk - \ "local" -14 / Microsoft Azure Blob Storage - \ "azureblob" -15 / Microsoft OneDrive - \ "onedrive" -16 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -17 / Pcloud - \ "pcloud" -18 / QingCloud Object Storage - \ "qingstor" -19 / SSH/SFTP Connection - \ "sftp" -20 / Webdav - \ "webdav" -21 / Yandex Disk - \ "yandex" -22 / http Connection - \ "http" -Storage> 1 +[snip] +Storage> alias Remote or path to alias. Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path". remote> /mnt/storage/backup @@ -7704,35 +8762,11 @@ n/r/c/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive +[snip] +XX / Amazon Drive \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" -10 / Local Disk - \ "local" -11 / Microsoft OneDrive - \ "onedrive" -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -13 / SSH/SFTP Connection - \ "sftp" -14 / Yandex Disk - \ "yandex" -Storage> 1 +[snip] +Storage> amazon cloud drive Amazon Application Client Id - required. 
client_id> your client ID goes here Amazon Application Client Secret - required. @@ -7994,17 +9028,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Alias for an existing remote - \ "alias" - 2 / Amazon Drive - \ "amazon cloud drive" - 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio) - \ "s3" - 4 / Backblaze B2 - \ "b2" [snip] -23 / http Connection - \ "http" +XX / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio) + \ "s3" +[snip] Storage> s3 Choose your S3 provider. Choose a number from below, or type in your own value @@ -8158,6 +9185,8 @@ Choose a number from below, or type in your own value \ "GLACIER" 7 / Glacier Deep Archive storage class \ "DEEP_ARCHIVE" + 8 / Intelligent-Tiering storage class + \ "INTELLIGENT_TIERING" storage_class> 1 Remote config -------------------- @@ -8285,6 +9314,8 @@ permissions are required to be available on the bucket being written to: * `PutObject` * `PutObjectACL` +When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required. + Example policy: ``` @@ -8307,7 +9338,12 @@ Example policy: "arn:aws:s3:::BUCKET_NAME/*", "arn:aws:s3:::BUCKET_NAME" ] - } + }, + { + "Effect": "Allow", + "Action": "s3:ListAllMyBuckets", + "Resource": "arn:aws:s3:::*" + } ] } ``` @@ -8866,6 +9902,8 @@ The storage class to use when storing new objects in S3. - Glacier storage class - "DEEP_ARCHIVE" - Glacier Deep Archive storage class + - "INTELLIGENT_TIERING" + - Intelligent-Tiering storage class #### --s3-storage-class @@ -9452,9 +10490,8 @@ n/s> n name> wasabi Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) +[snip] +XX / Amazon S3 (also Dreamhost, Ceph, Minio) \ "s3" [snip] Storage> s3 @@ -9690,33 +10727,11 @@ n/q> n name> remote Type of storage to configure. 
Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 +[snip] +XX / Backblaze B2 \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 3 +[snip] +Storage> b2 Account ID or Application Key ID account> 123456789abc Application Key @@ -9961,6 +10976,34 @@ server to the nearest millisecond appended to them. Note that when using `--b2-versions` no file write operations are permitted, so you can't upload files or delete them. +### B2 and rclone link ### + +Rclone supports generating file share links for private B2 buckets. +They can either be for a file for example: + +``` +./rclone link B2:bucket/path/to/file.txt +https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx + +``` + +or if run on a directory you will get: + +``` +./rclone link B2:bucket/path +https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx +``` + +you can then use the authorization token (the part of the url from the + `?Authorization=` on) on any file path under that directory. For example: + +``` +https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx +https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx +https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx + +``` + ### Standard Options @@ -10079,6 +11122,7 @@ Custom endpoint for downloads. This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. 
+This is probably only useful for a public bucket. Leave blank if you want to use the endpoint provided by Backblaze. - Config: download_url @@ -10086,6 +11130,18 @@ Leave blank if you want to use the endpoint provided by Backblaze. - Type: string - Default: "" +#### --b2-download-auth-duration + +Time before the authorization token will expire in s or suffix ms|s|m|h|d. + +The duration before the download authorization token will expire. +The minimum value is 1 second. The maximum value is one week. + +- Config: download_auth_duration +- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION +- Type: Duration +- Default: 1w + Box @@ -10113,38 +11169,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box +[snip] +XX / Box \ "box" - 5 / Dropbox - \ "dropbox" - 6 / Encrypt/Decrypt a remote - \ "crypt" - 7 / FTP Connection - \ "ftp" - 8 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 9 / Google Drive - \ "drive" -10 / Hubic - \ "hubic" -11 / Local Disk - \ "local" -12 / Microsoft OneDrive - \ "onedrive" -13 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -14 / SSH/SFTP Connection - \ "sftp" -15 / Yandex Disk - \ "yandex" -16 / http Connection - \ "http" +[snip] Storage> box Box App Client Id - leave blank normally. client_id> @@ -10397,11 +11425,11 @@ n/r/c/s/q> n name> test-cache Type of storage to configure. Choose a number from below, or type in your own value -... - 5 / Cache a remote +[snip] +XX / Cache a remote \ "cache" -... -Storage> 5 +[snip] +Storage> cache Remote to cache. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). @@ -11000,33 +12028,11 @@ n/s/q> n name> secret Type of storage to configure. 
Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote +[snip] +XX / Encrypt/Decrypt a remote \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 5 +[snip] +Storage> crypt Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). @@ -11500,33 +12506,11 @@ e/n/d/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox +[snip] +XX / Dropbox \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 4 +[snip] +Storage> dropbox Dropbox App Key - leave blank normally. app_key> Dropbox App Secret - leave blank normally. @@ -11694,7 +12678,7 @@ Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] -10 / FTP Connection +XX / FTP Connection \ "ftp" [snip] Storage> ftp @@ -11888,33 +12872,11 @@ e/n/d/q> n name> remote Type of storage to configure. 
Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) +[snip] +XX / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 6 +[snip] +Storage> google cloud storage Google Application Client Id - leave blank normally. client_id> Google Application Client Secret - leave blank normally. @@ -12326,7 +13288,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -10 / Google Drive +XX / Google Drive \ "drive" [snip] Storage> drive @@ -13229,7 +14191,7 @@ be the same account as the Google Drive you want to access) 2. Select a project or create a new project. 3. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the -then "Google Drive API". +"Google Drive API". 4. Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials", then @@ -13244,6 +14206,362 @@ in rclone config to add a new remote or edit an existing remote. (Thanks to @balazer on github for these instructions.) +Google Photos +------------------------------------------------- + +The rclone backend for [Google Photos](https://www.google.com/photos/about/) is +a specialized backend for transferring photos and videos to and from +Google Photos. + +**NB** The Google Photos API which rclone uses has quite a few +limitations, so please read the [limitations section](#limitations) +carefully to make sure it is suitable for your use. 
+
+## Configuring Google Photos
+
+The initial setup for Google Photos involves getting a token from Google Photos
+which you need to do in your browser. `rclone config` walks you
+through it.
+
+Here is an example of how to make a remote called `remote`. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Google Photos
+   \ "google photos"
+[snip]
+Storage> google photos
+** See help for google photos backend at: https://rclone.org/googlephotos/ **
+
+Google Application Client Id
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
+client_id>
+Google Application Client Secret
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
+client_secret>
+Set to make the Google Photos backend read only.
+
+If you choose read only then rclone will only request read only access
+to your photos, otherwise rclone will request full access.
+Enter a boolean value (true or false). Press Enter for the default ("false").
+read_only>
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n> n
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+
+*** IMPORTANT: All media items uploaded to Google Photos with rclone
+*** are stored in full resolution at original quality. These uploads
+*** will count towards storage in your Google Account.
+
+--------------------
+[remote]
+type = google photos
+token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from Google if you use auto config mode. This only
+runs from the moment it opens your browser to the moment you get back
+the verification code. This is on `http://127.0.0.1:53682/` and this
+may require you to unblock it temporarily if you are running a host
+firewall, or use manual mode.
+
+This remote is called `remote` and can now be used like this
+
+See all the albums in your photos
+
+    rclone lsd remote:album
+
+Make a new album
+
+    rclone mkdir remote:album/newAlbum
+
+List the contents of an album
+
+    rclone ls remote:album/newAlbum
+
+Sync `/home/local/images` to Google Photos, removing any excess
+files in the album.
+
+    rclone sync /home/local/images remote:album/newAlbum
+
+## Layout
+
+As Google Photos is not a general purpose cloud storage system the
+backend is laid out to help you navigate it.
+
+The directories under `media` show different ways of categorizing the
+media. Each file will appear multiple times. So if you want to make
+a backup of your Google Photos you might choose to backup
+`remote:media/by-month`. (**NB** `remote:media/by-day` is rather slow
+at the moment so avoid it for syncing.)
+
+Note that all your photos and videos will appear somewhere under
+`media`, but they may not appear under `album` unless you've put them
+into albums.
+
+```
+/
+- upload
+    - file1.jpg
+    - file2.jpg
+    - ...
+- media
+    - all
+        - file1.jpg
+        - file2.jpg
+        - ...
+    - by-year
+        - 2000
+            - file1.jpg
+            - ...
+        - 2001
+            - file2.jpg
+            - ...
+        - ...
+    - by-month
+        - 2000
+            - 2000-01
+                - file1.jpg
+                - ...
+            - 2000-02
+                - file2.jpg
+                - ...
+        - ...
+    - by-day
+        - 2000
+            - 2000-01-01
+                - file1.jpg
+                - ...
+            - 2000-01-02
+                - file2.jpg
+                - ...
+        - ...
+- album
+    - album name
+    - album name/sub
+- shared-album
+    - album name
+    - album name/sub
+```
+
+There are two writable parts of the tree, the `upload` directory and
+subdirectories of the `album` directory.
+
+The `upload` directory is for uploading files you don't want to put
+into albums. This will be empty to start with and will contain the
+files you've uploaded for one rclone session only, becoming empty
+again when you restart rclone. The use case for this would be if you
+have a load of files you just want to dump into Google Photos in one
+go. For repeated syncing, uploading to `album` will work better.
+
+Directories within the `album` directory are also writable and you
+may create new directories (albums) under `album`. If you copy files
+with a directory hierarchy in there then rclone will create albums
+with the `/` character in them. For example if you do
+
+    rclone copy /path/to/images remote:album/images
+
+and the images directory contains
+
+```
+images
+    - file1.jpg
+    dir
+        file2.jpg
+    dir2
+        dir3
+            file3.jpg
+```
+
+then rclone will create the following albums with the following files
+in them:
+
+- images
+    - file1.jpg
+- images/dir
+    - file2.jpg
+- images/dir2/dir3
+    - file3.jpg
+
+This means that you can use the `album` path pretty much like a normal
+filesystem and it is a good target for repeated syncing.
+
+The `shared-album` directory shows albums shared with you or by you.
+This is similar to the Sharing tab in the Google Photos web interface.
+
+## Limitations
+
+Only images and videos can be uploaded. If you attempt to upload
+files that are not images or videos, or formats that Google Photos
+doesn't understand, rclone will upload the file, then Google Photos
+will give an error when it is turned into a media item.
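Since anything that is not an image or a video will be rejected when Google Photos turns it into a media item, it can help to pre-filter what you feed to `rclone copy`. Here is a minimal sketch; the MIME-type heuristic below is an illustration only, not an official list of the formats Google Photos accepts:

```python
# Sketch: select only files whose guessed MIME type is image/* or
# video/*, suitable for feeding to "rclone copy --files-from".
# NB: mimetypes guesses from the file extension alone, so this is a
# heuristic pre-filter, not a guarantee of what Google Photos accepts.
import mimetypes
from pathlib import Path

def media_files(root):
    """Yield files under root that look like images or videos."""
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        mime, _ = mimetypes.guess_type(path.name)
        if mime and mime.split("/")[0] in ("image", "video"):
            yield path
```

The selected paths could then be written, relative to the source root, to a list file and passed to rclone with its `--files-from` flag, leaving the non-media files behind.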
+
+Note that all media items uploaded to Google Photos through the API
+are stored in full resolution at "original quality" and **will** count
+towards your storage quota in your Google Account. The API does
+**not** offer a way to upload in "high quality" mode.
+
+### Downloading Images
+
+When images are downloaded, their EXIF location data is stripped
+(according to the docs and my tests). This is a limitation of the
+Google Photos API and is covered by
+[bug #112096115](https://issuetracker.google.com/issues/112096115).
+
+### Downloading Videos
+
+When videos are downloaded, they are a heavily compressed version of
+the video compared to downloading via the Google Photos web
+interface. This is covered by
+[bug #113672044](https://issuetracker.google.com/issues/113672044).
+
+### Duplicates
+
+If a file name is duplicated in a directory then rclone will add the
+file ID into its name. So two files called `file.jpg` would then
+appear as `file {123456}.jpg` and `file {ABCDEF}.jpg` (the actual IDs
+are a lot longer, alas!).
+
+If you upload the same image (with the same binary data) twice then
+Google Photos will deduplicate it. However it will retain the
+filename from the first upload which may confuse rclone. For example
+if you uploaded an image to `upload` then uploaded the same image to
+`album/my_album` the filename of the image in `album/my_album` will be
+what it was uploaded with initially, not what you uploaded it with to
+`album`. In practice this shouldn't cause too many problems.
+
+### Modified time
+
+The date shown for media in Google Photos is the creation date as
+determined by the EXIF information, or the upload date if that is not
+known.
+
+This is not changeable by rclone and is not the modification date of
+the media on local disk. This means that rclone cannot use the dates
+from Google Photos for syncing purposes.
+
+### Size
+
+The Google Photos API does not return the size of media.
This means
+that when syncing to Google Photos, rclone can only do a file
+existence check.
+
+It is possible to read the size of the media, but this needs an extra
+HTTP HEAD request per media item so is very slow and uses up a lot of
+transactions. This can be enabled with the `--gphotos-read-size`
+option or the `read_size = true` config parameter.
+
+If you want to use the backend with `rclone mount` you will need to
+enable this flag, otherwise you will not be able to read media off the
+mount.
+
+### Albums
+
+Rclone can only upload files to albums it created. This is a
+[limitation of the Google Photos API](https://developers.google.com/photos/library/guides/manage-albums).
+
+Rclone can only remove files it uploaded from albums it created.
+
+### Deleting files
+
+Rclone can remove files from albums it created, but note that the
+Google Photos API does not allow media to be deleted permanently so
+this media will still remain. See [bug #109759781](https://issuetracker.google.com/issues/109759781).
+
+Rclone cannot delete files anywhere except under `album`.
+
+### Deleting albums
+
+The Google Photos API does not support deleting albums - see [bug #135714733](https://issuetracker.google.com/issues/135714733).
+
+
+### Standard Options
+
+Here are the standard options specific to google photos (Google Photos).
+
+#### --gphotos-client-id
+
+Google Application Client Id
+Leave blank normally.
+
+- Config: client_id
+- Env Var: RCLONE_GPHOTOS_CLIENT_ID
+- Type: string
+- Default: ""
+
+#### --gphotos-client-secret
+
+Google Application Client Secret
+Leave blank normally.
+
+- Config: client_secret
+- Env Var: RCLONE_GPHOTOS_CLIENT_SECRET
+- Type: string
+- Default: ""
+
+#### --gphotos-read-only
+
+Set to make the Google Photos backend read only.
+
+If you choose read only then rclone will only request read only access
+to your photos, otherwise rclone will request full access.
+ +- Config: read_only +- Env Var: RCLONE_GPHOTOS_READ_ONLY +- Type: bool +- Default: false + +### Advanced Options + +Here are the advanced options specific to google photos (Google Photos). + +#### --gphotos-read-size + +Set to read the size of media items. + +Normally rclone does not read the size of media items since this takes +another transaction. This isn't necessary for syncing. However +rclone mount needs to know the size of files in advance of reading +them, so setting this flag when using rclone mount is recommended if +you want to read the media. + +- Config: read_size +- Env Var: RCLONE_GPHOTOS_READ_SIZE +- Type: bool +- Default: false + + + HTTP ------------------------------------------------- @@ -13272,36 +14590,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" -10 / Local Disk - \ "local" -11 / Microsoft OneDrive - \ "onedrive" -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -13 / SSH/SFTP Connection - \ "sftp" -14 / Yandex Disk - \ "yandex" -15 / http Connection +[snip] +XX / http Connection \ "http" +[snip] Storage> http URL of http host to connect to Choose a number from below, or type in your own value @@ -13389,6 +14681,25 @@ URL of http host to connect to Here are the advanced options specific to http (http Connection). +#### --http-headers + +Set HTTP headers for all transactions + +Use this to set additional HTTP headers for all transactions + +The input format is comma separated list of key,value pairs. Standard +[CSV encoding](https://godoc.org/encoding/csv) may be used. 
+ +For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'. + +You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'. + + +- Config: headers +- Env Var: RCLONE_HTTP_HEADERS +- Type: CommaSepList +- Default: + #### --http-no-slash Set this if the site doesn't end directories with / @@ -13435,33 +14746,11 @@ n/s> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic +[snip] +XX / Hubic \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk - \ "yandex" -Storage> 8 +[snip] +Storage> hubic Hubic Client Id - leave blank normally. client_id> Hubic Client Secret - leave blank normally. @@ -13632,15 +14921,12 @@ Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] -14 / JottaCloud +XX / JottaCloud \ "jottacloud" [snip] Storage> jottacloud ** See help for jottacloud backend at: https://rclone.org/jottacloud/ ** -User Name: -Enter a string value. Press Enter for the default (""). -user> user@email.tld Edit advanced config? (y/n) y) Yes n) No @@ -13654,6 +14940,7 @@ Rclone has it's own Jottacloud API KEY which works fine as long as one only uses y) Yes n) No y/n> y +Username> 0xC4KE@gmail.com Your Jottacloud password is only required during setup and will not be stored. password: @@ -13665,7 +14952,7 @@ y/n> y Please select the device to use. 
Normally this will be Jotta
Choose a number from below, or type in an existing value
 1 > DESKTOP-3H31129
- 2 > test1
+ 2 > fla1
 3 > Jotta
Devices> 3
Please select the mountpoint to use.
Normally this will be Archive
@@ -13756,19 +15043,6 @@ and the current usage.

Jottacloud requires each 'device' to be registered. Rclone brings such a registration to easily access your account but if you want to use Jottacloud together with rclone on multiple machines you NEED to create a separate deviceID/deviceSecret on each machine. You will be asked for it when setting up the remote. Please be aware that this also means that copying the rclone config from one machine to another does NOT work with Jottacloud accounts. You have to create it on each machine.

-### Standard Options
-
-Here are the standard options specific to jottacloud (JottaCloud).
-
-#### --jottacloud-user
-
-User Name:
-
-- Config: user
-- Env Var: RCLONE_JOTTACLOUD_USER
-- Type: string
-- Default: ""
-
 ### Advanced Options

Here are the advanced options specific to jottacloud (JottaCloud).
@@ -13853,60 +15127,10 @@ name> koofr
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value - 1 / A stackable unification remote, which can appear to merge the contents of several remotes - \ "union" - 2 / Alias for an existing remote - \ "alias" - 3 / Amazon Drive - \ "amazon cloud drive" - 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc) - \ "s3" - 5 / Backblaze B2 - \ "b2" - 6 / Box - \ "box" - 7 / Cache a remote - \ "cache" - 8 / Dropbox - \ "dropbox" - 9 / Encrypt/Decrypt a remote - \ "crypt" -10 / FTP Connection - \ "ftp" -11 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" -12 / Google Drive - \ "drive" -13 / Hubic - \ "hubic" -14 / JottaCloud - \ "jottacloud" -15 / Koofr +[snip] +XX / Koofr \ "koofr" -16 / Local Disk - \ "local" -17 / Mega - \ "mega" -18 / Microsoft Azure Blob Storage - \ "azureblob" -19 / Microsoft OneDrive - \ "onedrive" -20 / OpenDrive - \ "opendrive" -21 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -22 / Pcloud - \ "pcloud" -23 / QingCloud Object Storage - \ "qingstor" -24 / SSH/SFTP Connection - \ "sftp" -25 / Webdav - \ "webdav" -26 / Yandex Disk - \ "yandex" -27 / http Connection - \ "http" +[snip] Storage> koofr ** See help for koofr backend at: https://rclone.org/koofr/ ** @@ -14002,6 +15226,15 @@ Mount ID of the mount to use. If omitted, the primary mount is used. - Type: string - Default: "" +#### --koofr-setmtime + +Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. + +- Config: setmtime +- Env Var: RCLONE_KOOFR_SETMTIME +- Type: bool +- Default: true + ### Limitations ### @@ -14040,14 +15273,10 @@ n/s/q> n name> remote Type of storage to configure. 
Choose a number from below, or type in your own value - 1 / Alias for an existing remote - \ "alias" [snip] -14 / Mega +XX / Mega \ "mega" [snip] -23 / http Connection - \ "http" Storage> mega User name user> you@example.com @@ -14236,40 +15465,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box - \ "box" - 5 / Dropbox - \ "dropbox" - 6 / Encrypt/Decrypt a remote - \ "crypt" - 7 / FTP Connection - \ "ftp" - 8 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 9 / Google Drive - \ "drive" -10 / Hubic - \ "hubic" -11 / Local Disk - \ "local" -12 / Microsoft Azure Blob Storage +[snip] +XX / Microsoft Azure Blob Storage \ "azureblob" -13 / Microsoft OneDrive - \ "onedrive" -14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -15 / SSH/SFTP Connection - \ "sftp" -16 / Yandex Disk - \ "yandex" -17 / http Connection - \ "http" +[snip] Storage> azureblob Storage Account Name account> account_name @@ -14384,7 +15583,7 @@ Here are the standard options specific to azureblob (Microsoft Azure Blob Storag #### --azureblob-account -Storage Account Name (leave blank to use connection string or SAS URL) +Storage Account Name (leave blank to use SAS URL or Emulator) - Config: account - Env Var: RCLONE_AZUREBLOB_ACCOUNT @@ -14393,7 +15592,7 @@ Storage Account Name (leave blank to use connection string or SAS URL) #### --azureblob-key -Storage Account Key (leave blank to use connection string or SAS URL) +Storage Account Key (leave blank to use SAS URL or Emulator) - Config: key - Env Var: RCLONE_AZUREBLOB_KEY @@ -14403,13 +15602,22 @@ Storage Account Key (leave blank to use connection string or SAS URL) #### --azureblob-sas-url SAS URL for container level access only -(leave blank if using account/key or connection string) +(leave blank 
if using account/key or Emulator)

- Config: sas_url
- Env Var: RCLONE_AZUREBLOB_SAS_URL
- Type: string
- Default: ""

+#### --azureblob-use-emulator
+
+Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
+
+- Config: use_emulator
+- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR
+- Type: bool
+- Default: false
+
 ### Advanced Options

Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).
@@ -14489,6 +15697,12 @@ tiering blob to "Hot" or "Cool".
MD5 sums are only uploaded with chunked files if the source has an MD5
sum.  This will always be the case for a local to azure copy.

+### Azure Storage Emulator Support ###
+You can test rclone with the storage emulator locally. To do this, make
+sure the Azure storage emulator is installed locally, then set up a new
+remote with `rclone config` following the instructions in the
+introduction and set the `use_emulator` config option to `true`. You do
+not need to provide a default account name or key when using the emulator.
+
 Microsoft OneDrive
-----------------------------------------

Paths are specified as `remote:path`
@@ -14519,11 +15733,11 @@ name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
-...
-18 / Microsoft OneDrive
+[snip]
+XX / Microsoft OneDrive
   \ "onedrive"
-...
-Storage> 18
+[snip]
+Storage> onedrive
Microsoft App Client Id
Leave blank normally.
Enter a string value. Press Enter for the default ("").
@@ -14815,35 +16029,11 @@ e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / OpenDrive +[snip] +XX / OpenDrive \ "opendrive" -11 / Microsoft OneDrive - \ "onedrive" -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -13 / SSH/SFTP Connection - \ "sftp" -14 / Yandex Disk - \ "yandex" -Storage> 10 +[snip] +Storage> opendrive Username username> Password @@ -14942,37 +16132,11 @@ n/r/c/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" -10 / Local Disk - \ "local" -11 / Microsoft OneDrive - \ "onedrive" -12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -13 / QingStor Object Storage +[snip] +XX / QingStor Object Storage \ "qingstor" -14 / SSH/SFTP Connection - \ "sftp" -15 / Yandex Disk - \ "yandex" -Storage> 13 +[snip] +Storage> qingstor Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter QingStor credentials in the next step @@ -15230,48 +16394,10 @@ n/s/q> n name> remote Type of storage to configure. 
Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box - \ "box" - 5 / Cache a remote - \ "cache" - 6 / Dropbox - \ "dropbox" - 7 / Encrypt/Decrypt a remote - \ "crypt" - 8 / FTP Connection - \ "ftp" - 9 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" -10 / Google Drive - \ "drive" -11 / Hubic - \ "hubic" -12 / Local Disk - \ "local" -13 / Microsoft Azure Blob Storage - \ "azureblob" -14 / Microsoft OneDrive - \ "onedrive" -15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) +[snip] +XX / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" -16 / Pcloud - \ "pcloud" -17 / QingCloud Object Storage - \ "qingstor" -18 / SSH/SFTP Connection - \ "sftp" -19 / Webdav - \ "webdav" -20 / Yandex Disk - \ "yandex" -21 / http Connection - \ "http" +[snip] Storage> swift Get swift credentials from environment variables in standard OpenStack form. Choose a number from below, or type in your own value @@ -15756,44 +16882,10 @@ n/s/q> n name> remote Type of storage to configure. 
Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box - \ "box" - 5 / Dropbox - \ "dropbox" - 6 / Encrypt/Decrypt a remote - \ "crypt" - 7 / FTP Connection - \ "ftp" - 8 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 9 / Google Drive - \ "drive" -10 / Hubic - \ "hubic" -11 / Local Disk - \ "local" -12 / Microsoft Azure Blob Storage - \ "azureblob" -13 / Microsoft OneDrive - \ "onedrive" -14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -15 / Pcloud +[snip] +XX / Pcloud \ "pcloud" -16 / QingCloud Object Storage - \ "qingstor" -17 / SSH/SFTP Connection - \ "sftp" -18 / Yandex Disk - \ "yandex" -19 / http Connection - \ "http" +[snip] Storage> pcloud Pcloud App Client Id - leave blank normally. client_id> @@ -15888,12 +16980,226 @@ Leave blank normally. +premiumize.me +----------------------------------------- + +Paths are specified as `remote:path` + +Paths may be as deep as required, eg `remote:directory/subdirectory`. + +The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you +need to do in your browser. `rclone config` walks you through it. + +Here is an example of how to make a remote called `remote`. First run: + + rclone config + +This will guide you through an interactive setup process: + +``` +No remotes found - make a new one +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. +Enter a string value. Press Enter for the default (""). +Choose a number from below, or type in your own value +[snip] +XX / premiumize.me + \ "premiumizeme" +[snip] +Storage> premiumizeme +** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ ** + +Remote config +Use auto config? 
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = premiumizeme
+token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d>
+```
+
+See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+machine with no Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from premiumize.me. This only runs from the moment it opens
+your browser to the moment you get back the verification code. This
+is on `http://127.0.0.1:53682/` and it may require you to unblock
+it temporarily if you are running a host firewall.
+
+Once configured you can then use `rclone` like this,
+
+List directories in top level of your premiumize.me
+
+    rclone lsd remote:
+
+List all the files in your premiumize.me
+
+    rclone ls remote:
+
+To copy a local directory to a premiumize.me directory called backup
+
+    rclone copy /home/source remote:backup
+
+### Modified time and hashes ###
+
+premiumize.me does not support modification times or hashes, therefore
+syncing will default to `--size-only` checking.  Note that using
+`--update` will work.
+
+
+### Standard Options
+
+Here are the standard options specific to premiumizeme (premiumize.me).
+
+#### --premiumizeme-api-key
+
+API Key.
+
+This is not normally used - use oauth instead.
+
+
+- Config: api_key
+- Env Var: RCLONE_PREMIUMIZEME_API_KEY
+- Type: string
+- Default: ""
+
+
+
+### Limitations ###
+
+Note that premiumize.me is case insensitive so you can't have a file called
+"Hello.doc" and one called "hello.doc".
+
+premiumize.me file names can't have the `\` or `"` characters in.
+rclone maps these to and from identical looking unicode equivalents
+`\` and `"`
+
+premiumize.me only supports filenames up to 255 characters in length.
+
+put.io
+---------------------------------
+
+Paths are specified as `remote:path`
+
+put.io paths may be as deep as required, eg
+`remote:directory/subdirectory`.
+
+The initial setup for put.io involves getting a token from put.io
+which you need to do in your browser.  `rclone config` walks you
+through it.
+
+Here is an example of how to make a remote called `remote`.  First run:
+
+     rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> putio
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Put.io
+   \ "putio"
+[snip]
+Storage> putio
+** See help for putio backend at: https://rclone.org/putio/ **
+
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[putio]
+type = putio
+token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name                 Type
+====                 ====
+putio                putio
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+```
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from put.io if you use auto config mode.
This only
+runs from the moment it opens your browser to the moment you get back
+the verification code.  This is on `http://127.0.0.1:53682/` and it
+may require you to unblock it temporarily if you are running a host
+firewall, or use manual mode.
+
+You can then use it like this,
+
+List directories in top level of your put.io
+
+    rclone lsd remote:
+
+List all the files in your put.io
+
+    rclone ls remote:
+
+To copy a local directory to a put.io directory called backup
+
+    rclone copy /home/source remote:backup
+
+
+
+
 SFTP
----------------------------------------

SFTP is the [Secure (or SSH) File Transfer
Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).

+The SFTP backend can be used with a number of different providers:
+
+* C14
+* rsync.net
+
 SFTP runs over SSH v2 and is
installed as standard with most modern SSH installations.
@@ -15920,36 +17226,10 @@ n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
-   \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
-   \ "s3"
- 3 / Backblaze B2
-   \ "b2"
- 4 / Dropbox
-   \ "dropbox"
- 5 / Encrypt/Decrypt a remote
-   \ "crypt"
- 6 / FTP Connection
-   \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
-   \ "google cloud storage"
- 8 / Google Drive
-   \ "drive"
- 9 / Hubic
-   \ "hubic"
-10 / Local Disk
-   \ "local"
-11 / Microsoft OneDrive
-   \ "onedrive"
-12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
-   \ "swift"
-13 / SSH/SFTP Connection
+[snip]
+XX / SSH/SFTP Connection
   \ "sftp"
-14 / Yandex Disk
-   \ "yandex"
-15 / http Connection
-   \ "http"
+[snip]
Storage> sftp
SSH host to connect to
Choose a number from below, or type in your own value
@@ -15959,22 +17239,22 @@ host> example.com
SSH username, leave blank for current username, ncw
user> sftpuser
SSH port, leave blank to use default (22)
-port>
y) Yes type in my own password g) Generate random password n) No leave this optional password blank y/g/n> n Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. -key_file> +key_file> Remote config -------------------- [remote] host = example.com user = sftpuser -port = -pass = -key_file = +port = +pass = +key_file = -------------------- y) Yes this is OK e) Edit this remote @@ -16127,7 +17407,7 @@ when the ssh-agent contains many keys. #### --sftp-use-insecure-cipher -Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. +Enable the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. Those algorithms are insecure and may allow plaintext data to be recovered by an attacker. - Config: use_insecure_cipher - Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER @@ -16137,7 +17417,7 @@ Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow p - "false" - Use default Cipher list. - "true" - - Enables the use of the aes128-cbc cipher. + - Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. #### --sftp-disable-hashcheck @@ -16191,6 +17471,24 @@ Set the modified time on the remote if set. - Type: bool - Default: true +#### --sftp-md5sum-command + +The command used to read md5 hashes. Leave blank for autodetect. + +- Config: md5sum_command +- Env Var: RCLONE_SFTP_MD5SUM_COMMAND +- Type: string +- Default: "" + +#### --sftp-sha1sum-command + +The command used to read sha1 hashes. Leave blank for autodetect. + +- Config: sha1sum_command +- Env Var: RCLONE_SFTP_SHA1SUM_COMMAND +- Type: string +- Default: "" + ### Limitations ### @@ -16209,7 +17507,7 @@ return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. 
`about` will fail if it does not have shell -access or if `df` is not in the remote's PATH. +access or if `df` is not in the remote's PATH. Note that some SFTP servers (eg Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them @@ -16232,6 +17530,19 @@ with it: `--dump-headers`, `--dump-bodies`, `--dump-auth` Note that `--timeout` isn't supported (but `--contimeout` is). + +## C14 {#c14} + +C14 is supported through the SFTP backend. + +See [C14's documentation](https://www.online.net/en/storage/c14-cold-storage) + +## rsync.net {#rsync-net} + +rsync.net is supported through the SFTP backend. + +See [rsync.net's documentation of rclone examples](https://www.rsync.net/products/rclone.html). + Union ----------------------------------------- @@ -16272,58 +17583,10 @@ n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Alias for an existing remote - \ "alias" - 2 / Amazon Drive - \ "amazon cloud drive" - 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio) - \ "s3" - 4 / Backblaze B2 - \ "b2" - 5 / Box - \ "box" - 6 / Builds a stackable unification remote, which can appear to merge the contents of several remotes +[snip] +XX / Union merges the contents of several remotes \ "union" - 7 / Cache a remote - \ "cache" - 8 / Dropbox - \ "dropbox" - 9 / Encrypt/Decrypt a remote - \ "crypt" -10 / FTP Connection - \ "ftp" -11 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" -12 / Google Drive - \ "drive" -13 / Hubic - \ "hubic" -14 / JottaCloud - \ "jottacloud" -15 / Local Disk - \ "local" -16 / Mega - \ "mega" -17 / Microsoft Azure Blob Storage - \ "azureblob" -18 / Microsoft OneDrive - \ "onedrive" -19 / OpenDrive - \ "opendrive" -20 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -21 / Pcloud - \ "pcloud" -22 / QingCloud Object Storage - \ "qingstor" -23 / SSH/SFTP Connection - \ "sftp" 
-24 / Webdav - \ "webdav" -25 / Yandex Disk - \ "yandex" -26 / http Connection - \ "http" +[snip] Storage> union List of space separated remotes. Can be 'remotea:test/dir remoteb:', '"remotea:test/space dir" remoteb:', etc. @@ -16373,7 +17636,7 @@ Copy another local directory to the union directory called source, which will be ### Standard Options -Here are the standard options specific to union (A stackable unification remote, which can appear to merge the contents of several remotes). +Here are the standard options specific to union (Union merges the contents of several remotes). #### --union-remotes @@ -16415,7 +17678,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -22 / Webdav +XX / Webdav \ "webdav" [snip] Storage> webdav @@ -16447,7 +17710,7 @@ password: Confirm the password: password: Bearer token instead of user/pass (eg a Macaroon) -bearer_token> +bearer_token> Remote config -------------------- [remote] @@ -16456,7 +17719,7 @@ url = https://example.com/remote.php/webdav/ vendor = nextcloud user = user pass = *** ENCRYPTED *** -bearer_token = +bearer_token = -------------------- y) Yes this is OK e) Edit this remote @@ -16551,6 +17814,19 @@ Bearer token instead of user/pass (eg a Macaroon) - Type: string - Default: "" +### Advanced Options + +Here are the advanced options specific to webdav (Webdav). + +#### --webdav-bearer-token-command + +Command to run to get a bearer token + +- Config: bearer_token_command +- Env Var: RCLONE_WEBDAV_BEARER_TOKEN_COMMAND +- Type: string +- Default: "" + ## Provider notes ## @@ -16573,31 +17849,6 @@ Owncloud does. This [may be fixed](https://github.com/nextcloud/nextcloud-snap/issues/365) in the future. -### Put.io ### - -put.io can be accessed in a read only way using webdav. - -Configure the `url` as `https://webdav.put.io` and use your normal -account username and password for `user` and `pass`. Set the `vendor` -to `other`. 
- -Your config file should end up looking like this: - -``` -[putio] -type = webdav -url = https://webdav.put.io -vendor = other -user = YourUserName -pass = encryptedpassword -``` - -If you are using `put.io` with `rclone mount` then use the -`--read-only` flag to signal to the OS that it can't write to the -mount. - -For more help see [the put.io webdav docs](http://help.put.io/apps-and-integrations/ftp-and-webdav). - ### Sharepoint ### Rclone can be used with Sharepoint provided by OneDrive for Business @@ -16641,8 +17892,13 @@ pass = encryptedpassword ### dCache ### -[dCache](https://www.dcache.org/) is a storage system with WebDAV doors that support, beside basic and x509, -authentication with [Macaroons](https://www.dcache.org/manuals/workshop-2017-05-29-Umea/000-Final/anupam_macaroons_v02.pdf) (bearer tokens). +dCache is a storage system that supports many protocols and +authentication/authorisation schemes. For WebDAV clients, it allows +users to authenticate with username and password (BASIC), X.509, +Kerberos, and various bearer tokens, including +[Macaroons](https://www.dcache.org/manuals/workshop-2017-05-29-Umea/000-Final/anupam_macaroons_v02.pdf) +and [OpenID-Connect](https://en.wikipedia.org/wiki/OpenID_Connect) +access tokens. Configure as normal using the `other` type. Don't enter a username or password, instead enter your Macaroon as the `bearer_token`. @@ -16662,6 +17918,55 @@ bearer_token = your-macaroon There is a [script](https://github.com/sara-nl/GridScripts/blob/master/get-macaroon) that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file. +Macaroons may also be obtained from the dCacheView +web-browser/JavaScript client that comes with dCache. + +### OpenID-Connect ### + +dCache also supports authenticating with OpenID-Connect access tokens. +OpenID-Connect is a protocol (based on OAuth 2.0) that allows services +to identify users who have authenticated with some central service. 
+
+Support for OpenID-Connect in rclone is currently achieved using
+another software package called
+[oidc-agent](https://github.com/indigo-dc/oidc-agent).  This is a
+command-line tool that facilitates obtaining an access token.  Once
+installed and configured, an access token is obtained by running the
+`oidc-token` command.  The following example shows a (shortened)
+access token obtained from the *XDC* OIDC Provider.
+
+```
+paul@celebrimbor:~$ oidc-token XDC
+eyJraWQ[...]QFXDt0
+paul@celebrimbor:~$
+```
+
+**Note** Before the `oidc-token` command will work, the refresh token
+must be loaded into the oidc agent.  This is done with the `oidc-add`
+command (e.g., `oidc-add XDC`).  This is typically done once per login
+session.  Full details on this and how to register oidc-agent with
+your OIDC Provider are provided in the [oidc-agent
+documentation](https://indigo-dc.gitbooks.io/oidc-agent/).
+
+The rclone `bearer_token_command` configuration option is used to
+fetch the access token from oidc-agent.
+
+Configure as a normal WebDAV endpoint, using the 'other' vendor,
+leaving the username and password empty.  When prompted, choose to
+edit the advanced config and enter the command to get a bearer token
+(e.g., `oidc-token XDC`).
+
+The following example config shows a WebDAV endpoint that uses
+oidc-agent to supply an access token from the *XDC* OIDC Provider.
+
+```
+[dcache]
+type = webdav
+url = https://dcache.example.org/
+vendor = other
+bearer_token_command = oidc-token XDC
+```
+
 Yandex Disk
----------------------------------------

Paths are specified as `remote:path`

@@ -16683,33 +17988,11 @@ n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" -10 / Microsoft OneDrive - \ "onedrive" -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" -12 / SSH/SFTP Connection - \ "sftp" -13 / Yandex Disk +[snip] +XX / Yandex Disk \ "yandex" -Storage> 13 +[snip] +Storage> yandex Yandex Client Id - leave blank normally. client_id> Yandex Client Secret - leave blank normally. @@ -17153,10 +18436,158 @@ Don't cross filesystem boundaries (unix/macOS only). - Type: bool - Default: false +#### --local-case-sensitive + +Force the filesystem to report itself as case sensitive. + +Normally the local backend declares itself as case insensitive on +Windows/macOS and case sensitive for everything else. Use this flag +to override the default choice. + +- Config: case_sensitive +- Env Var: RCLONE_LOCAL_CASE_SENSITIVE +- Type: bool +- Default: false + +#### --local-case-insensitive + +Force the filesystem to report itself as case insensitive + +Normally the local backend declares itself as case insensitive on +Windows/macOS and case sensitive for everything else. Use this flag +to override the default choice. 
+ +- Config: case_insensitive +- Env Var: RCLONE_LOCAL_CASE_INSENSITIVE +- Type: bool +- Default: false + # Changelog +## v1.49.0 - 2019-08-26 + +* New backends + * [1fichier](https://rclone.org/fichier/) (Laura Hausmann) + * [Google Photos](/googlephotos) (Nick Craig-Wood) + * [Putio](https://rclone.org/putio/) (Cenk Alti) + * [premiumize.me](https://rclone.org/premiumizeme/) (Nick Craig-Wood) +* New Features + * Experimental [web GUI](https://rclone.org/gui/) (Chaitanya Bankanhal) + * Implement `--compare-dest` & `--copy-dest` (yparitcher) + * Implement `--suffix` without `--backup-dir` for backup to current dir (yparitcher) + * Add `--use-json-log` for JSON logging (justinalin) + * Add `config reconnect`, `config userinfo` and `config disconnect` subcommands. (Nick Craig-Wood) + * Add context propagation to rclone (Aleksandar Jankovic) + * Reworking internal statistics interfaces so they work with rc jobs (Aleksandar Jankovic) + * Add Higher units for ETA (AbelThar) + * Update rclone logos to new design (Andreas Chlupka) + * hash: Add CRC-32 support (Cenk Alti) + * help showbackend: Fixed advanced option category when there are no standard options (buengese) + * ncdu: Display/Copy to Clipboard Current Path (Gary Kim) + * operations: + * Run hashing operations in parallel (Nick Craig-Wood) + * Don't calculate checksums when using `--ignore-checksum` (Nick Craig-Wood) + * Check transfer hashes when using `--size-only` mode (Nick Craig-Wood) + * Disable multi thread copy for local to local copies (Nick Craig-Wood) + * Debug successful hashes as well as failures (Nick Craig-Wood) + * rc + * Add ability to stop async jobs (Aleksandar Jankovic) + * Return current settings if core/bwlimit called without parameters (Nick Craig-Wood) + * Rclone-WebUI integration with rclone (Chaitanya Bankanhal) + * Added command line parameter to control the cross origin resource sharing (CORS) in the rcd. 
(Security Improvement) (Chaitanya Bankanhal)
+    * Add anchor tags to the docs so links are consistent (Nick Craig-Wood)
+    * Remove _async key from input parameters after parsing so later operations won't get confused (buengese)
+    * Add call to clear stats (Aleksandar Jankovic)
+    * rcd
+        * Auto-login for web-gui (Chaitanya Bankanhal)
+        * Implement `--baseurl` for rcd and web-gui (Chaitanya Bankanhal)
+    * serve dlna
+        * Only select interfaces which can multicast for SSDP (Nick Craig-Wood)
+        * Add more builtin mime types to cover standard audio/video (Nick Craig-Wood)
+        * Fix missing mime types on Android causing missing videos (Nick Craig-Wood)
+    * serve ftp
+        * Refactor to bring into line with other serve commands (Nick Craig-Wood)
+        * Implement `--auth-proxy` (Nick Craig-Wood)
+    * serve http: Implement `--baseurl` (Nick Craig-Wood)
+    * serve restic: Implement `--baseurl` (Nick Craig-Wood)
+    * serve sftp
+        * Implement auth proxy (Nick Craig-Wood)
+        * Fix detection of whether server is authorized (Nick Craig-Wood)
+    * serve webdav
+        * Implement `--baseurl` (Nick Craig-Wood)
+        * Support `--auth-proxy` (Nick Craig-Wood)
+* Bug Fixes
+    * Make "bad record MAC" a retriable error (Nick Craig-Wood)
+    * copyurl: Fix copying files that return HTTP errors (Nick Craig-Wood)
+    * march: Fix checking sub-directories when using `--no-traverse` (buengese)
+    * rc
+        * Fix unmarshalable http.AuthFn in options and put in test for marshalability (Nick Craig-Wood)
+        * Move job expire flags to rc to fix initialization problem (Nick Craig-Wood)
+        * Fix `--loopback` with rc/list and others (Nick Craig-Wood)
+    * rcat: Fix slowdown on systems with multiple hashes (Nick Craig-Wood)
+    * rcd: Fix permissions problems on cache directory with web gui download (Nick Craig-Wood)
+* Mount
+    * Default `--daemon-timeout` to 15 minutes on macOS and FreeBSD (Nick Craig-Wood)
+    * Update docs to show mounting from root OK for bucket based (Nick Craig-Wood)
+    * Remove nonseekable flag from write files (Nick
Craig-Wood) +* VFS + * Make write without cache more efficient (Nick Craig-Wood) + * Fix `--vfs-cache-mode minimal` and `writes` ignoring cached files (Nick Craig-Wood) +* Local + * Add `--local-case-sensitive` and `--local-case-insensitive` (Nick Craig-Wood) + * Avoid polluting page cache when uploading local files to remote backends (Michał Matczuk) + * Don't calculate any hashes by default (Nick Craig-Wood) + * Fadvise run syscall on a dedicated go routine (Michał Matczuk) +* Azure Blob + * Azure Storage Emulator support (Sandeep) + * Updated config help details to remove connection string references (Sandeep) + * Make all operations work from the root (Nick Craig-Wood) +* B2 + * Implement link sharing (yparitcher) + * Enable server side copy to copy between buckets (Nick Craig-Wood) + * Make all operations work from the root (Nick Craig-Wood) +* Drive + * Fix server side copy of big files (Nick Craig-Wood) + * Update API for teamdrive use (Nick Craig-Wood) + * Add error for purge with `--drive-trashed-only` (ginvine) +* Fichier + * Make FolderID int and adjust related code (buengese) +* Google Cloud Storage + * Reduce oauth scope requested as suggested by Google (Nick Craig-Wood) + * Make all operations work from the root (Nick Craig-Wood) +* HTTP + * Add `--http-headers` flag for setting arbitrary headers (Nick Craig-Wood) +* Jottacloud + * Use new api for retrieving internal username (buengese) + * Refactor configuration and minor cleanup (buengese) +* Koofr + * Support setting modification times on Koofr backend. 
(jaKa) +* Opendrive + * Refactor to use existing lib/rest facilities for uploads (Nick Craig-Wood) +* Qingstor + * Upgrade to v3 SDK and fix listing loop (Nick Craig-Wood) + * Make all operations work from the root (Nick Craig-Wood) +* S3 + * Add INTELLIGENT_TIERING storage class (Matti Niemenmaa) + * Make all operations work from the root (Nick Craig-Wood) +* SFTP + * Add missing interface check and fix About (Nick Craig-Wood) + * Completely ignore all modtime checks if SetModTime=false (Jon Fautley) + * Support md5/sha1 with rsync.net (Nick Craig-Wood) + * Save the md5/sha1 command in use to the config file for efficiency (Nick Craig-Wood) + * Opt-in support for diffie-hellman-group-exchange-sha256 diffie-hellman-group-exchange-sha1 (Yi FU) +* Swift + * Use FixRangeOption to fix 0 length files via the VFS (Nick Craig-Wood) + * Fix upload when using no_chunk to return the correct size (Nick Craig-Wood) + * Make all operations work from the root (Nick Craig-Wood) + * Fix segments leak during failed large file uploads. (nguyenhuuluan434) +* WebDAV + * Add `--webdav-bearer-token-command` (Nick Craig-Wood) + * Refresh token when it expires with `--webdav-bearer-token-command` (Nick Craig-Wood) + * Add docs for using bearer_token_command with oidc-agent (Paul Millar) + ## v1.48.0 - 2019-06-15 * New commands @@ -17488,10 +18919,10 @@ Don't cross filesystem boundaries (unix/macOS only). 
* Enable softfloat on MIPS arch (Scott Edlund) * Integration test framework revamped with a better report and better retries (Nick Craig-Wood) * Bug Fixes - * cmd: Make --progress update the stats correctly at the end (Nick Craig-Wood) + * cmd: Make `--progress` update the stats correctly at the end (Nick Craig-Wood) * config: Create config directory on save if it is missing (Nick Craig-Wood) * dedupe: Check for existing filename before renaming a dupe file (ssaqua) - * move: Don't create directories with --dry-run (Nick Craig-Wood) + * move: Don't create directories with `--dry-run` (Nick Craig-Wood) * operations: Fix Purge and Rmdirs when dir is not the root (Nick Craig-Wood) * serve http/webdav/restic: Ensure rclone exits if the port is in use (Nick Craig-Wood) * Mount @@ -17538,13 +18969,13 @@ Don't cross filesystem boundaries (unix/macOS only). * Implement specialised help for flags and backends (Nick Craig-Wood) * Show URL of backend help page when starting config (Nick Craig-Wood) * stats: Long names now split in center (Joanna Marek) - * Add --log-format flag for more control over log output (dcpu) + * Add `--log-format` flag for more control over log output (dcpu) * rc: Add support for OPTIONS and basic CORS (frenos) * stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes) * Bug Fixes * Fix -P not ending with a new line (Nick Craig-Wood) - * config: don't create default config dir when user supplies --config (albertony) - * Don't print non-ASCII characters with --progress on windows (Nick Craig-Wood) + * config: don't create default config dir when user supplies `--config` (albertony) + * Don't print non-ASCII characters with `--progress` on windows (Nick Craig-Wood) * Correct logs for excluded items (ssaqua) * Mount * Remove EXPERIMENTAL tags (Nick Craig-Wood) @@ -17572,19 +19003,19 @@ Don't cross filesystem boundaries (unix/macOS only). 
* Alias * Fix handling of Windows network paths (Nick Craig-Wood) * Azure Blob - * Add --azureblob-list-chunk parameter (Santiago Rodríguez) + * Add `--azureblob-list-chunk` parameter (Santiago Rodríguez) * Implemented settier command support on azureblob remote. (sandeepkru) * Work around SDK bug which causes errors for chunk-sized files (Nick Craig-Wood) * Box * Implement link sharing. (Sebastian Bünger) * Drive - * Add --drive-import-formats - google docs can now be imported (Fabian Möller) + * Add `--drive-import-formats` - google docs can now be imported (Fabian Möller) * Rewrite mime type and extension handling (Fabian Möller) * Add document links (Fabian Möller) * Add support for multipart document extensions (Fabian Möller) * Add support for apps-script to json export (Fabian Möller) * Fix escaped chars in documents during list (Fabian Möller) - * Add --drive-v2-download-min-size a workaround for slow downloads (Fabian Möller) + * Add `--drive-v2-download-min-size` a workaround for slow downloads (Fabian Möller) * Improve directory notifications in ChangeNotify (Fabian Möller) * When listing team drives in config, continue on failure (Nick Craig-Wood) * FTP @@ -17593,8 +19024,8 @@ Don't cross filesystem boundaries (unix/macOS only). * Fix service_account_file being ignored (Fabian Möller) * Jottacloud * Minor improvement in quota info (omit if unlimited) (albertony) - * Add --fast-list support (albertony) - * Add permanent delete support: --jottacloud-hard-delete (albertony) + * Add `--fast-list` support (albertony) + * Add permanent delete support: `--jottacloud-hard-delete` (albertony) * Add link sharing support (albertony) * Fix handling of reserved characters. (Sebastian Bünger) * Fix socket leak on Object.Remove (Nick Craig-Wood) @@ -17610,13 +19041,13 @@ Don't cross filesystem boundaries (unix/macOS only). 
* S3 * Use custom pacer, to retry operations when reasonable (Craig Miskell) * Use configured server-side-encryption and storace class options when calling CopyObject() (Paul Kohout) - * Make --s3-v2-auth flag (Nick Craig-Wood) + * Make `--s3-v2-auth` flag (Nick Craig-Wood) * Fix v2 auth on files with spaces (Nick Craig-Wood) * Union * Implement union backend which reads from multiple backends (Felix Brucker) * Implement optional interfaces (Move, DirMove, Copy etc) (Nick Craig-Wood) * Fix ChangeNotify to support multiple remotes (Fabian Möller) - * Fix --backup-dir on union backend (Nick Craig-Wood) + * Fix `--backup-dir` on union backend (Nick Craig-Wood) * WebDAV * Add another time format (Nick Craig-Wood) * Add a small pause after failed upload before deleting file (Nick Craig-Wood) @@ -17631,7 +19062,7 @@ Point release to fix hubic and azureblob backends. * Bug Fixes * ncdu: Return error instead of log.Fatal in Show (Fabian Möller) - * cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood) + * cmd: Fix crash with `--progress` and `--stats 0` (Nick Craig-Wood) * docs: Tidy website display (Anagh Kumar Baranwal) * Azure Blob: * Fix multi-part uploads. (sandeepkru) @@ -19018,31 +20449,41 @@ Point release to fix hubic and azureblob backends. * Project started -Bugs and Limitations --------------------- +# Bugs and Limitations -### Empty directories are left behind / not created ## +## Limitations -With remotes that have a concept of directory, eg Local and Drive, -empty directories may be left behind, or not created when one was -expected. +### Directory timestamps aren't preserved -This is because rclone doesn't have a concept of a directory - it only -works on objects. Most of the object storage systems can't actually -store a directory so there is nowhere for rclone to store anything -about directories. +Rclone doesn't currently preserve the timestamps of directories. This +is because rclone only really considers objects when syncing. 
-You can work round this to some extent with the`purge` command which -will delete everything under the path, **inluding** empty directories. +### Rclone struggles with millions of files in a directory -This may be fixed at some point in -[Issue #100](https://github.com/rclone/rclone/issues/100) +Currently rclone loads each directory entirely into memory before +using it. Since each Rclone object takes 0.5k-1k of memory this can +take a very long time and use an extremely large amount of memory. -### Directory timestamps aren't preserved ## +Millions of files in a directory tend to be caused by software writing +to cloud storage (eg S3 buckets). -For the same reason as the above, rclone doesn't have a concept of a -directory - it only works on objects, therefore it can't preserve the -timestamps of directories. +### Bucket based remotes and folders + +Bucket based remotes (eg S3/GCS/Swift/B2) do not have a concept of +directories. Rclone therefore cannot create directories in them which +means that empty directories on a bucket based remote will tend to +disappear. + +Some software creates empty keys ending in `/` as directory markers. +Rclone doesn't do this as it potentially creates more objects and +costs more. It may do in future (probably with a flag). + +## Bugs + +Bugs are stored in rclone's GitHub project: + +* [Reported bugs](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug) +* [Known issues](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+milestone%3A%22Known+Problem%22) Frequently Asked Questions -------------------------- @@ -19259,7 +20700,7 @@ This is free software under the terms of MIT the license (check the COPYING file included with the source code).
``` -Copyright (C) 2012 by Nick Craig-Wood https://www.craig-wood.com/nick/ +Copyright (C) 2019 by Nick Craig-Wood https://www.craig-wood.com/nick/ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal @@ -19540,6 +20981,26 @@ Contributors * forgems * Florian Apolloner * Aleksandar Jankovic + * Maran + * nguyenhuuluan434 + * Laura Hausmann + * yparitcher + * AbelThar + * Matti Niemenmaa + * Russell Davis + * Yi FU + * Paul Millar + * justinalin + * EliEron + * justina777 + * Chaitanya Bankanhal + * Michał Matczuk + * Macavirus + * Abhinav Sharma + * ginvine <34869051+ginvine@users.noreply.github.com> + * Patrick Wang + * Cenk Alti + * Andreas Chlupka # Contact the rclone project # @@ -19566,5 +21027,7 @@ You can also follow me on twitter for rclone announcements: ## Email ## Or if all else fails or you want to ask something private or -confidential email [Nick Craig-Wood](mailto:nick@craig-wood.com) +confidential email [Nick Craig-Wood](mailto:nick@craig-wood.com). +Please don't email me requests for help - those are better directed to +the forum - thanks! 
diff --git a/MANUAL.txt b/MANUAL.txt index bff8f12ea..04a7ceaa4 100644 --- a/MANUAL.txt +++ b/MANUAL.txt @@ -1,29 +1,30 @@ rclone(1) User Manual Nick Craig-Wood -Jun 15, 2019 +Aug 26, 2019 -RCLONE +RCLONE - RSYNC FOR CLOUD STORAGE -[Logo] - Rclone is a command line program to sync files and directories to and from: +- 1Fichier - Alibaba Cloud (Aliyun) Object Storage System (OSS) - Amazon Drive (See note) - Amazon S3 - Backblaze B2 - Box - Ceph +- C14 - DigitalOcean Spaces - Dreamhost - Dropbox - FTP - Google Cloud Storage - Google Drive +- Google Photos - HTTP - Hubic - Jottacloud @@ -41,6 +42,7 @@ from: - Oracle Cloud Storage - ownCloud - pCloud +- premiumize.me - put.io - QingStor - Rackspace Cloud Files @@ -67,6 +69,7 @@ Features - Optional FUSE mount (rclone mount) - Multi-threaded downloads to local disk - Can serve local or remote files over HTTP/WebDav/FTP/SFTP/dlna +- Experimental Web based GUI Links @@ -178,8 +181,8 @@ You can also build and install rclone in the GOPATH (which defaults to and this will build the binary in $GOPATH/bin (~/go/bin/rclone by default) after downloading the source to -$GOPATH/src/github.com/rclone/rclone (~/go/src/github.com/rclone/rclone by -default). +$GOPATH/src/github.com/rclone/rclone (~/go/src/github.com/rclone/rclone +by default). Installation with Ansible @@ -211,6 +214,7 @@ option: See the following for detailed instructions for +- 1Fichier - Alias - Amazon Drive - Amazon S3 @@ -223,6 +227,7 @@ See the following for detailed instructions for - FTP - Google Cloud Storage - Google Drive +- Google Photos - HTTP - Hubic - Jottacloud @@ -233,6 +238,8 @@ See the following for detailed instructions for - Openstack Swift / Rackspace Cloudfiles / Memset Memstore - OpenDrive - Pcloud +- premiumize.me +- put.io - QingStor - SFTP - Union @@ -281,23 +288,26 @@ Options -h, --help help for config +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. 
- rclone config create - Create a new remote with name, type and options. - rclone config delete - Delete an existing remote . +- rclone config disconnect - Disconnects user from remote - rclone config dump - Dump the config file as JSON. - rclone config edit - Enter an interactive configuration session. - rclone config file - Show path of configuration file in use. - rclone config password - Update password in an existing remote. - rclone config providers - List in JSON format all the providers and options. +- rclone config reconnect - Re-authenticates user with remote. - rclone config show - Print (decrypted) config file, or the config for a single remote. - rclone config update - Update options in an existing remote. - -Auto generated by spf13/cobra on 15-Jun-2019 +- rclone config userinfo - Prints info about logged in user of remote. rclone copy @@ -360,12 +370,12 @@ Options --create-empty-src-dirs Create empty source dirs on destination after copy -h, --help help for copy +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone sync @@ -401,12 +411,12 @@ Options --create-empty-src-dirs Create empty source dirs on destination after sync -h, --help help for sync +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone move @@ -447,12 +457,12 @@ Options --delete-empty-src-dirs Delete empty source dirs after move -h, --help help for move +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone delete @@ -487,12 +497,12 @@ Options -h, --help help for delete +See the global flags page for global options not listed here. 
+ SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone purge @@ -510,12 +520,12 @@ Options -h, --help help for purge +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone mkdir @@ -531,12 +541,12 @@ Options -h, --help help for mkdir +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone rmdir @@ -553,12 +563,12 @@ Options -h, --help help for rmdir +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone check @@ -591,12 +601,12 @@ Options -h, --help help for check --one-way Check one way only, source files must exist on remote +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone ls @@ -644,12 +654,12 @@ Options -h, --help help for ls +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone lsd @@ -708,12 +718,12 @@ Options -h, --help help for lsd -R, --recursive Recurse into the listing. +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone lsl @@ -762,12 +772,12 @@ Options -h, --help help for lsl +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. 
-Auto generated by spf13/cobra on 15-Jun-2019 - rclone md5sum @@ -784,12 +794,12 @@ Options -h, --help help for md5sum +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone sha1sum @@ -806,12 +816,12 @@ Options -h, --help help for sha1sum +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone size @@ -828,12 +838,12 @@ Options -h, --help help for size --json format output as JSON +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone version @@ -874,12 +884,12 @@ Options --check Check for new version. -h, --help help for version +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone cleanup @@ -896,12 +906,12 @@ Options -h, --help help for cleanup +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone dedupe @@ -1004,12 +1014,12 @@ Options --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive") -h, --help help for dedupe +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone about @@ -1067,12 +1077,12 @@ Options -h, --help help for about --json Format output as JSON +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. 
-Auto generated by spf13/cobra on 15-Jun-2019 - rclone authorize @@ -1089,12 +1099,12 @@ Options -h, --help help for authorize +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone cachestats @@ -1110,12 +1120,12 @@ Options -h, --help help for cachestats +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone cat @@ -1153,12 +1163,12 @@ Options --offset int Start printing at offset N (or from end if -ve). --tail int Only print the last N characters. +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone config create @@ -1192,12 +1202,12 @@ Options -h, --help help for create +See the global flags page for global options not listed here. + SEE ALSO - rclone config - Enter an interactive configuration session. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone config delete @@ -1213,11 +1223,36 @@ Options -h, --help help for delete +See the global flags page for global options not listed here. + SEE ALSO - rclone config - Enter an interactive configuration session. -Auto generated by spf13/cobra on 15-Jun-2019 + +rclone config disconnect + +Disconnects user from remote + +Synopsis + +This disconnects the remote: passed in to the cloud storage system. + +This normally means revoking the oauth token. + +To reconnect use “rclone config reconnect”. + + rclone config disconnect remote: [flags] + +Options + + -h, --help help for disconnect + +See the global flags page for global options not listed here. + +SEE ALSO + +- rclone config - Enter an interactive configuration session. 
rclone config dump @@ -1234,12 +1269,12 @@ Options -h, --help help for dump +See the global flags page for global options not listed here. + SEE ALSO - rclone config - Enter an interactive configuration session. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone config edit @@ -1257,12 +1292,12 @@ Options -h, --help help for edit +See the global flags page for global options not listed here. + SEE ALSO - rclone config - Enter an interactive configuration session. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone config file @@ -1278,12 +1313,12 @@ Options -h, --help help for file +See the global flags page for global options not listed here. + SEE ALSO - rclone config - Enter an interactive configuration session. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone config password @@ -1307,12 +1342,12 @@ Options -h, --help help for password +See the global flags page for global options not listed here. + SEE ALSO - rclone config - Enter an interactive configuration session. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone config providers @@ -1328,11 +1363,36 @@ Options -h, --help help for providers +See the global flags page for global options not listed here. + SEE ALSO - rclone config - Enter an interactive configuration session. -Auto generated by spf13/cobra on 15-Jun-2019 + +rclone config reconnect + +Re-authenticates user with remote. + +Synopsis + +This reconnects remote: passed in to the cloud storage system. + +To disconnect the remote use “rclone config disconnect”. + +This normally means going through the interactive oauth flow again. + + rclone config reconnect remote: [flags] + +Options + + -h, --help help for reconnect + +See the global flags page for global options not listed here. + +SEE ALSO + +- rclone config - Enter an interactive configuration session. rclone config show @@ -1349,12 +1409,12 @@ Options -h, --help help for show +See the global flags page for global options not listed here. 
+ SEE ALSO - rclone config - Enter an interactive configuration session. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone config update @@ -1384,11 +1444,34 @@ Options -h, --help help for update +See the global flags page for global options not listed here. + SEE ALSO - rclone config - Enter an interactive configuration session. -Auto generated by spf13/cobra on 15-Jun-2019 + +rclone config userinfo + +Prints info about logged in user of remote. + +Synopsis + +This prints the details of the person logged in to the cloud storage +system. + + rclone config userinfo remote: [flags] + +Options + + -h, --help help for userinfo + --json Format output as JSON + +See the global flags page for global options not listed here. + +SEE ALSO + +- rclone config - Enter an interactive configuration session. rclone copyto @@ -1430,12 +1513,12 @@ Options -h, --help help for copyto +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone copyurl @@ -1452,12 +1535,12 @@ Options -h, --help help for copyurl +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone cryptcheck @@ -1500,12 +1583,12 @@ Options -h, --help help for cryptcheck --one-way Check one way only, source files must exist on destination +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone cryptdecode @@ -1531,12 +1614,12 @@ Options -h, --help help for cryptdecode --reverse Reverse cryptdecode, encrypts filenames +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. 
-Auto generated by spf13/cobra on 15-Jun-2019 - rclone dbhashsum @@ -1554,12 +1637,12 @@ Options -h, --help help for dbhashsum +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone deletefile @@ -1577,12 +1660,12 @@ Options -h, --help help for deletefile +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone genautocomplete @@ -1597,6 +1680,8 @@ Options -h, --help help for genautocomplete +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. @@ -1605,8 +1690,6 @@ SEE ALSO - rclone genautocomplete zsh - Output zsh completion script for rclone. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone genautocomplete bash @@ -1634,12 +1717,12 @@ Options -h, --help help for bash +See the global flags page for global options not listed here. + SEE ALSO - rclone genautocomplete - Output completion script for a given shell. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone genautocomplete zsh @@ -1667,12 +1750,12 @@ Options -h, --help help for zsh +See the global flags page for global options not listed here. + SEE ALSO - rclone genautocomplete - Output completion script for a given shell. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone gendocs @@ -1690,12 +1773,12 @@ Options -h, --help help for gendocs +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone hashsum @@ -1726,12 +1809,12 @@ Options -h, --help help for hashsum +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. 
-Auto generated by spf13/cobra on 15-Jun-2019 - rclone link @@ -1756,12 +1839,12 @@ Options -h, --help help for link +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone listremotes @@ -1780,12 +1863,12 @@ Options -h, --help help for listremotes --long Show the type as well as names. +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone lsf @@ -1923,12 +2006,12 @@ Options -R, --recursive Recurse into the listing. -s, --separator string Separator for the items in the format. (default ";") +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone lsjson @@ -2018,12 +2101,12 @@ Options --original Show the ID of the underlying Object. -R, --recursive Recurse into the listing. +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone mount @@ -2091,11 +2174,9 @@ applications won’t work with their files on an rclone mount without section for more info. The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, -Hubic) won’t work from the root - you will need to specify a bucket, or -a path within the bucket. So swift: won’t work whereas swift:bucket will -as will swift:bucket/path. None of these support the concept of -directories, so empty directories will have a tendency to disappear once -they fall out of the directory cache. +Hubic) do not support the concept of empty directories, so empty +directories will have a tendency to disappear once they fall out of the +directory cache. 
Only supported on Linux, FreeBSD, OS X and Windows at the moment. @@ -2339,12 +2420,12 @@ Options --volname string Set the volume name (not supported by all OSes). --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone moveto @@ -2388,12 +2469,12 @@ Options -h, --help help for moveto +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone ncdu @@ -2419,6 +2500,7 @@ Here are the keys - press ‘?’ to toggle the help on and off g toggle graph n,s,C sort by name,size,count d delete file/directory + Y display current path ^L refresh screen ? to toggle help on and off q/ESC/c-C to quit @@ -2435,12 +2517,12 @@ Options -h, --help help for ncdu +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone obscure @@ -2456,12 +2538,12 @@ Options -h, --help help for obscure +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone rc @@ -2507,12 +2589,12 @@ Options --url string URL to connect to rclone remote control. (default "http://localhost:5572/") --user string Username to use to rclone remote control. +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone rcat @@ -2548,12 +2630,12 @@ Options -h, --help help for rcat +See the global flags page for global options not listed here. 
+ SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone rcd @@ -2577,12 +2659,12 @@ Options -h, --help help for rcd +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone rmdirs @@ -2607,12 +2689,12 @@ Options -h, --help help for rmdirs --leave-root Do not remove root directory if empty +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone serve @@ -2633,6 +2715,8 @@ Options -h, --help help for serve +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. @@ -2643,8 +2727,6 @@ SEE ALSO - rclone serve sftp - Serve the remote over SFTP. - rclone serve webdav - Serve remote:path over webdav. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone serve dlna @@ -2831,12 +2913,12 @@ Options --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) +See the global flags page for global options not listed here. + SEE ALSO - rclone serve - Serve a remote over a protocol. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone serve ftp @@ -2996,11 +3078,70 @@ This mode should support all normal file system operations. If an upload or download fails it will be retried up to –low-level-retries times. +Auth Proxy + +If you supply the parameter --auth-proxy /path/to/program then rclone +will use that program to generate backends on the fly which then are +used to authenticate incoming requests. 
This uses a simple JSON based +protocol with input on STDIN and output on STDOUT. + +There is an example program bin/test_proxy.py in the rclone source code. + +The program’s job is to take a user and pass on the input and turn those +into the config for a backend on STDOUT in JSON format. This config will +have any default parameters for the backend added, but it won’t use +configuration from environment variables or command line options - it is +the job of the proxy program to make a complete config. + +The generated config must have this extra parameter - _root - root to +use for the backend + +And it may have this parameter - _obscure - comma separated strings for +parameters to obscure + +For example the program might take this on STDIN + + { + "user": "me", + "pass": "mypassword" + } + +And return this on STDOUT + + { + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" + } + +This would mean that an SFTP backend would be created on the fly for the +user and pass returned in the output to the host given. Note that since +_obscure is set to pass, rclone will obscure the pass parameter before +creating the backend (which is required for sftp backends). + +The program can manipulate the supplied user in any way, for example to +proxy many different sftp backends, you could make the user be +user@example.com and then set the host to example.com in the output and +the user to user. For security you’d probably want to restrict the host +to a limited list. + +Note that an internal cache is keyed on user so only use that for +configuration, don’t use pass. This also means that if a user’s password +is changed the cache will need to expire (which takes 5 mins) before it +takes effect. + +This can be used to build general purpose proxies to any kind of backend +that rclone supports. + rclone serve ftp remote:path [flags] Options --addr string IPaddress:Port or :Port to bind server to.
(default "localhost:2121") + --auth-proxy string A program to use to create the backend from the auth. --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --dir-perms FileMode Directory permissions (default 0777) --file-perms FileMode File permissions (default 0666) @@ -3024,12 +3165,12 @@ Options --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) +See the global flags page for global options not listed here. + SEE ALSO - rclone serve - Serve a remote over a protocol. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone serve http @@ -3066,6 +3207,13 @@ transfer. –max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. +–baseurl controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used –baseurl “/rclone” then +rclone would serve from a URL starting with “/rclone/”. This is useful +if you wish to proxy rclone serve. Rclone automatically inserts leading +and trailing “/” on –baseurl, so –baseurl “rclone”, –baseurl “/rclone” +and –baseurl “/rclone/” are all treated identically. + Authentication By default this will serve files without needing a login. @@ -3235,6 +3383,7 @@ If an upload or download fails it will be retried up to Options --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --baseurl string Prefix for URLs - leave blank for root. --cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with --dir-cache-time duration Time to cache directory entries for. (default 5m0s) @@ -3264,12 +3413,12 @@ Options --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. 
(default 128M) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) +See the global flags page for global options not listed here. + SEE ALSO - rclone serve - Serve a remote over a protocol. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone serve restic @@ -3369,6 +3518,13 @@ transfer. –max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. +–baseurl controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used –baseurl “/rclone” then +rclone would serve from a URL starting with “/rclone/”. This is useful +if you wish to proxy rclone serve. Rclone automatically inserts leading +and trailing “/” on –baseurl, so –baseurl “rclone”, –baseurl “/rclone” +and –baseurl “/rclone/” are all treated identically. + Authentication By default this will serve files without needing a login. @@ -3408,6 +3564,7 @@ Options --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") --append-only disallow deletion of repository data + --baseurl string Prefix for URLs - leave blank for root. --cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with -h, --help help for restic @@ -3422,12 +3579,12 @@ Options --stdio run an HTTP2 server on stdin/stdout --user string User name for authentication. +See the global flags page for global options not listed here. + SEE ALSO - rclone serve - Serve a remote over a protocol. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone serve sftp @@ -3597,11 +3754,70 @@ This mode should support all normal file system operations. If an upload or download fails it will be retried up to –low-level-retries times. 
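The –baseurl normalisation described for these serve commands (leading and trailing “/” are inserted automatically, so “rclone”, “/rclone” and “/rclone/” are treated identically) can be sketched as a small helper. This is an illustrative reimplementation of the documented behaviour, not rclone’s actual code; representing the root as "/" is an assumption:

```python
def normalize_baseurl(baseurl: str) -> str:
    """Mimic the documented --baseurl handling: "rclone", "/rclone"
    and "/rclone/" should all normalise to the same prefix."""
    if baseurl == "":
        return "/"  # assumption: serve from the root by default
    if not baseurl.startswith("/"):
        baseurl = "/" + baseurl
    if not baseurl.endswith("/"):
        baseurl = baseurl + "/"
    return baseurl
```

All three documented spellings collapse to one prefix, which is why a reverse proxy can be pointed at any of them interchangeably.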
+Auth Proxy + +If you supply the parameter --auth-proxy /path/to/program then rclone +will use that program to generate backends on the fly which then are +used to authenticate incoming requests. This uses a simple JSON based +protocol with input on STDIN and output on STDOUT. + +There is an example program bin/test_proxy.py in the rclone source code. + +The program’s job is to take a user and pass on the input and turn those +into the config for a backend on STDOUT in JSON format. This config will +have any default parameters for the backend added, but it won’t use +configuration from environment variables or command line options - it is +the job of the proxy program to make a complete config. + +The generated config must have this extra parameter - _root - root to +use for the backend + +And it may have this parameter - _obscure - comma separated strings for +parameters to obscure + +For example the program might take this on STDIN + + { + "user": "me", + "pass": "mypassword" + } + +And return this on STDOUT + + { + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" + } + +This would mean that an SFTP backend would be created on the fly for the +user and pass returned in the output to the host given. Note that since +_obscure is set to pass, rclone will obscure the pass parameter before +creating the backend (which is required for sftp backends). + +The program can manipulate the supplied user in any way, for example to +proxy many different sftp backends, you could make the user be +user@example.com and then set the host to example.com in the output and +the user to user. For security you’d probably want to restrict the host +to a limited list. + +Note that an internal cache is keyed on user so only use that for +configuration, don’t use pass. This also means that if a user’s password +is changed the cache will need to expire (which takes 5 mins) before it +takes effect.
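A minimal auth proxy speaking the protocol above might look like the following. This is a sketch modelled on the protocol description, not the actual bin/test_proxy.py; the pass-through of user/pass and the fixed upstream host are assumptions:

```python
#!/usr/bin/env python3
"""Sketch of an rclone --auth-proxy program: read {"user": ..., "pass": ...}
as JSON on STDIN and write a complete backend config as JSON on STDOUT."""
import json
import sys


def make_config(request: dict) -> dict:
    """Turn the incoming user/pass request into a complete backend config.
    _root is required; listing "pass" in _obscure makes rclone obscure it
    before creating the backend (required for sftp backends)."""
    return {
        "type": "sftp",
        "_root": "",
        "_obscure": "pass",
        "user": request["user"],
        "pass": request["pass"],
        "host": "sftp.example.com",  # assumption: one fixed upstream host
    }


def run() -> None:
    # Speak the proxy protocol: JSON request on STDIN, config on STDOUT.
    json.dump(make_config(json.load(sys.stdin)), sys.stdout)

# In a real proxy you would call run() under `if __name__ == "__main__":`.
```

A real deployment would validate the user, restrict the host to a known list, and reject bad credentials by exiting non-zero rather than passing everything through.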
+ +This can be used to build general purpose proxies to any kind of backend +that rclone supports. + rclone serve sftp remote:path [flags] Options --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2022") + --auth-proxy string A program to use to create the backend from the auth. --authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys") --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --dir-perms FileMode Directory permissions (default 0777) @@ -3626,12 +3842,12 @@ Options --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) +See the global flags page for global options not listed here. + SEE ALSO - rclone serve - Serve a remote over a protocol. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone serve webdav @@ -3674,6 +3890,13 @@ transfer. –max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. +–baseurl controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used –baseurl “/rclone” then +rclone would serve from a URL starting with “/rclone/”. This is useful +if you wish to proxy rclone serve. Rclone automatically inserts leading +and trailing “/” on –baseurl, so –baseurl “rclone”, –baseurl “/rclone” +and –baseurl “/rclone/” are all treated identically. + Authentication By default this will serve files without needing a login. @@ -3838,11 +4061,71 @@ This mode should support all normal file system operations. If an upload or download fails it will be retried up to –low-level-retries times. +Auth Proxy + +If you supply the parameter --auth-proxy /path/to/program then rclone +will use that program to generate backends on the fly which then are +used to authenticate incoming requests. 
This uses a simple JSON based +protocol with input on STDIN and output on STDOUT. + +There is an example program bin/test_proxy.py in the rclone source code. + +The program’s job is to take a user and pass on the input and turn those +into the config for a backend on STDOUT in JSON format. This config will +have any default parameters for the backend added, but it won’t use +configuration from environment variables or command line options - it is +the job of the proxy program to make a complete config. + +The generated config must have this extra parameter - _root - root to +use for the backend + +And it may have this parameter - _obscure - comma separated strings for +parameters to obscure + +For example the program might take this on STDIN + + { + "user": "me", + "pass": "mypassword" + } + +And return this on STDOUT + + { + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" + } + +This would mean that an SFTP backend would be created on the fly for the +user and pass returned in the output to the host given. Note that since +_obscure is set to pass, rclone will obscure the pass parameter before +creating the backend (which is required for sftp backends). + +The program can manipulate the supplied user in any way, for example to +proxy many different sftp backends, you could make the user be +user@example.com and then set the host to example.com in the output and +the user to user. For security you’d probably want to restrict the host +to a limited list. + +Note that an internal cache is keyed on user so only use that for +configuration, don’t use pass. This also means that if a user’s password +is changed the cache will need to expire (which takes 5 mins) before it +takes effect. + +This can be used to build general purpose proxies to any kind of backend +that rclone supports. + rclone serve webdav remote:path [flags] Options --addr string IPaddress:Port or :Port to bind server to.
(default "localhost:8080") + --auth-proxy string A program to use to create the backend from the auth. + --baseurl string Prefix for URLs - leave blank for root. --cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with --dir-cache-time duration Time to cache directory entries for. (default 5m0s) @@ -3874,12 +4157,12 @@ Options --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off) +See the global flags page for global options not listed here. + SEE ALSO - rclone serve - Serve a remote over a protocol. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone settier @@ -3916,12 +4199,12 @@ Options -h, --help help for settier +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone touch @@ -3939,12 +4222,12 @@ Options -C, --no-create Do not create the file if it does not exist. -t, --timestamp string Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05) +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. -Auto generated by spf13/cobra on 15-Jun-2019 - rclone tree @@ -4001,12 +4284,12 @@ Options -U, --unsorted Leave files unsorted. --version Sort files alphanumerically by version. +See the global flags page for global options not listed here. + SEE ALSO - rclone - Show help for rclone commands, flags and backends. 
-Auto generated by spf13/cobra on 15-Jun-2019 - Copying single files @@ -4224,6 +4507,8 @@ If running rclone from a script you might want to use today’s date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today’s date. +See --compare-dest and --copy-dest. + –bind string Local address to bind to for outgoing connections. This can be an IPv4 @@ -4342,6 +4627,18 @@ quicker than without the --checksum flag. When using this flag, rclone won’t update mtimes of remote files if they are incorrect as it would normally. +–compare-dest=DIR + +When using sync, copy or move DIR is checked in addition to the +destination for files. If a file identical to the source is found that +file is NOT copied from source. This is useful to copy just files that +have changed since the last backup. + +You must use the same remote as the destination of the sync. The compare +directory must not overlap the destination directory. + +See --copy-dest and --backup-dir. + –config=CONFIG_FILE Specify the location of the rclone config file. @@ -4370,6 +4667,19 @@ The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m by default. +–copy-dest=DIR + +When using sync, copy or move DIR is checked in addition to the +destination for files. If a file identical to the source is found that +file is server side copied from DIR to the destination. This is useful +for incremental backup. + +The remote in use must support server side copy and you must use the +same remote as the destination of the sync. The compare directory must +not overlap the destination directory. + +See --compare-dest and --backup-dir. + –dedupe-mode MODE Mode to run dedupe command in. One of interactive, skip, first, newest, @@ -4505,6 +4815,11 @@ warnings and significant events. ERROR is equivalent to -q. It only outputs error messages. 
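The interplay of --compare-dest and --copy-dest documented earlier in this section can be sketched as a decision function. This is an illustration of the documented behaviour, not rclone’s code; “identical” is simplified here to “same (size, hash) pair”:

```python
def transfer_action(src, dst=None, compare_dest=None, copy_dest=None):
    """Decide what to do with a source file.  Each argument is the
    file's (size, hash) tuple, or None if the file is absent in that
    location.  Returns "skip", "server-side-copy" or "copy"."""
    if dst is not None and dst == src:
        return "skip"              # destination already up to date
    if compare_dest is not None and compare_dest == src:
        return "skip"              # identical in --compare-dest: NOT copied
    if copy_dest is not None and copy_dest == src:
        return "server-side-copy"  # identical in --copy-dest: copied server side
    return "copy"                  # normal transfer from the source
```

This makes the difference between the two flags concrete: --compare-dest only suppresses transfers, while --copy-dest additionally materialises the file at the destination with a server side copy.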
+–use-json-log + +This switches the log format to JSON for rclone. The fields of the JSON +log are level, msg, source and time. + –low-level-retries NUMBER This controls the number of low level retries rclone does. @@ -4603,6 +4918,9 @@ rclone serve if --vfs-cache-mode is set to writes or above. NB that this ONLY works for a local destination but will work with any source. +NB that multi thread copies are disabled for local to local copies as +they are faster without them unless --multi-thread-streams is set explicitly. + –multi-thread-streams=N When using multi thread downloads (see above --multi-thread-cutoff) this @@ -4771,11 +5089,23 @@ The default is bytes. –suffix=SUFFIX -This is for use with --backup-dir only. If this isn’t set then ---backup-dir will move files with their original name. If it is set then -the files will have SUFFIX added on to them. +When using sync, copy or move any files which would have been +overwritten or deleted will have the suffix added to them. If there is a +file with the same path (after the suffix has been added), then it will +be overwritten. -See --backup-dir for more info. +The remote in use must support server side move or copy and you must use +the same remote as the destination of the sync. + +This is for use with files to add the suffix in the current directory or +with --backup-dir. See --backup-dir for more info. + +For example + + rclone sync /path/to/local/file remote:current --suffix .bak + +will sync /path/to/local to remote:current, but any files which +would have been updated or deleted will have .bak added. –suffix-keep-extension @@ -4936,15 +5266,16 @@ If an existing destination file has a modification time equal (within the computed modify window precision) to the source file’s, it will be updated if the sizes are different. -On remotes which don’t support mod time directly the time checked will -be the uploaded time.
This means that if uploading to one of these -remotes, rclone will skip any files which exist on the destination and -have an uploaded time that is newer than the modification time of the -source file. +On remotes which don’t support mod time directly (or when using +--use-server-mod-time) the time checked will be the uploaded time. This +means that if uploading to one of these remotes, rclone will skip any +files which exist on the destination and have an uploaded time that is +newer than the modification time of the source file. This can be useful when transferring to a remote which doesn’t support -mod times directly as it is more accurate than a --size-only check and -faster than using --checksum. +mod times directly (or when using --use-server-mod-time to avoid extra +API calls) as it is more accurate than a --size-only check and faster +than using --checksum. –use-mmap @@ -4970,10 +5301,14 @@ will make an API call to retrieve the metadata when the modtime is needed by an operation. Use this flag to disable the extra API call and rely instead on the -server’s modified time. In cases such as a local to remote sync, knowing -the local file is newer than the time it was last uploaded to the remote -is sufficient. In those cases, this flag can speed up the process and -reduce the number of API calls necessary. +server’s modified time. In cases such as a local to remote sync using +--update, knowing the local file is newer than the time it was last +uploaded to the remote is sufficient. In those cases, this flag can +speed up the process and reduce the number of API calls necessary. + +Using this flag on a sync operation without also using --update would +cause all files modified at any time other than the last upload time to +be uploaded again, which is probably not what you want. 
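The --update rule described above can be sketched as a predicate. This is illustrative only; the modify window and size handling are simplified from the documented behaviour:

```python
def should_skip(src_mtime: float, dst_mtime: float,
                src_size: int, dst_size: int,
                modify_window: float = 1.0) -> bool:
    """--update: skip files whose destination is newer than the source.
    If the modification times are equal within the modify window, the
    file is updated only if the sizes differ."""
    if dst_mtime - src_mtime > modify_window:
        return True                   # destination newer: skip
    if abs(dst_mtime - src_mtime) <= modify_window:
        return src_size == dst_size   # equal times: transfer only on size mismatch
    return False                      # source newer: transfer
```

With --use-server-mod-time, dst_mtime would be the upload time rather than the file’s true modification time, which is why combining the two flags only makes sense when “local file newer than last upload” is the question being asked.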
-v, -vv, –verbose @@ -5742,7 +6077,7 @@ You could then use it like this: This will transfer these files only (if they exist) /home/me/pics/file1.jpg → remote:pics/file1.jpg - /home/me/pics/subdir/file2.jpg → remote:pics/subdirfile1.jpg + /home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg To take a more complicated example, let’s say you had a few files you want to back up regularly with these absolute paths: @@ -5767,7 +6102,7 @@ files-from.txt like this: /home/user1/important → remote:backup/user1/important /home/user1/dir/file → remote:backup/user1/dir/file - /home/user2/stuff → remote:backup/stuff + /home/user2/stuff → remote:backup/user2/stuff You could of course choose / as the root too in which case your files-from.txt might look like this. @@ -5782,9 +6117,9 @@ And you would transfer it like this In this case there will be an extra home directory on the remote: - /home/user1/important → remote:home/backup/user1/important - /home/user1/dir/file → remote:home/backup/user1/dir/file - /home/user2/stuff → remote:home/backup/stuff + /home/user1/important → remote:backup/home/user1/important + /home/user1/dir/file → remote:backup/home/user1/dir/file + /home/user2/stuff → remote:backup/home/user2/stuff --min-size - Don’t transfer any file smaller than this @@ -5904,6 +6239,117 @@ should not be used multiple times. +GUI (EXPERIMENTAL) + + +Rclone can serve a web based GUI (graphical user interface). This is +somewhat experimental at the moment so things may be subject to change. + +Run this command in a terminal and rclone will download and then display +the GUI in a web browser. + + rclone rcd --rc-web-gui + +This will produce logs like this and rclone needs to continue to run to +serve the GUI: + + 2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip + 2019/08/25 11:40:14 NOTICE: Downloading webgui binary. Please wait. 
[Size: 3813937, Path : /home/USER/.cache/rclone/webgui/v0.0.6.zip] + 2019/08/25 11:40:16 NOTICE: Unzipping + 2019/08/25 11:40:16 NOTICE: Serving remote control on http://127.0.0.1:5572/ + +This assumes you are running rclone locally on your machine. It is +possible to separate rclone and the GUI - see below for details. + +If you wish to update to the latest API version then you can add +--rc-web-gui-update to the command line. + + +Using the GUI + +Once the GUI opens, you will be looking at the dashboard which has an +overall overview. + +On the left hand side you will see a series of view buttons you can +click on: + +- Dashboard - main overview +- Configs - examine and create new configurations +- Explorer - view, download and upload files to the cloud storage + systems +- Backend - view or alter the backend config +- Log out + +(More docs and walkthrough video to come!) + + +How it works + +When you run rclone rcd --rc-web-gui this is what happens: + +- Rclone starts but only runs the remote control API (“rc”). +- The API is bound to localhost with an auto generated username and + password. +- If the API bundle is missing then rclone will download it. +- rclone will start serving the files from the API bundle over the + same port as the API +- rclone will open the browser with a login_token so it can log + straight in. + + +Advanced use + +The rclone rcd may use any of the flags documented on the rc page. + +The flag --rc-web-gui is shorthand for + +- Download the web GUI if necessary +- Check we are using some authentication +- --rc-user gui +- --rc-pass +- --rc-serve + +These flags can be overridden as desired. + +See also the rclone rcd documentation.
+ +Example: Running a public GUI + +For example the GUI could be served on a public port over SSL using an +htpasswd file using the following flags: + +- --rc-web-gui +- --rc-addr :443 +- --rc-htpasswd /path/to/htpasswd +- --rc-cert /path/to/ssl.crt +- --rc-key /path/to/ssl.key + +Example: Running a GUI behind a proxy + +If you want to run the GUI behind a proxy at /rclone you could use these +flags: + +- --rc-web-gui +- --rc-baseurl rclone +- --rc-htpasswd /path/to/htpasswd + +Or instead of an htpasswd file if you just want a single user and password: + +- --rc-user me +- --rc-pass mypassword + + +Project + +The GUI is being developed in the rclone/rclone-webui-react +repository. + +Bug reports and contributions very welcome :-) + +If you have questions then please ask them on the rclone forum. + + + REMOTE CONTROLLING RCLONE @@ -5988,6 +6434,35 @@ the authorization in the URL in the http://user:pass@localhost/ style. Default Off. +–rc-web-gui + +Set this flag to serve the default web gui on the same port as rclone. + +Default Off. + +–rc-allow-origin + +Set the allowed Access-Control-Allow-Origin for rc requests. + +Can be used with –rc-web-gui if rclone is running on a different IP +than the web-gui. + +Default is IP address on which rc is running. + +–rc-web-fetch-url + +Set the URL to fetch the rclone-web-gui files from. + +Default +https://api.github.com/repos/rclone/rclone-webui-react/releases/latest. + +–rc-web-gui-update + +Set this flag to download / force update rclone-webui-react from the +rc-web-fetch-url. + +Default Off. + –rc-job-expire-duration=DURATION Expire finished async jobs older than DURATION (default 60s). @@ -6051,6 +6526,10 @@ commands. These start with _ to show they are different. Running asynchronous jobs with _async = true +Each rc call is classified as a job and it is assigned its own id. By +default jobs are executed immediately as they are created, i.e. +synchronously.
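An rc call carrying extra parameters such as _async can be sketched over the HTTP API using only the standard library. The helper name is hypothetical and localhost:5572 is the documented default rc address:

```python
import json
from urllib import request


def build_rc_request(command: str, params: dict = None,
                     url: str = "http://localhost:5572/") -> request.Request:
    """Build a POST request for the rclone rc API; parameters are sent
    as a JSON body, e.g. {"_async": True} to run the call as a job."""
    body = json.dumps(params or {}).encode("utf-8")
    return request.Request(
        url + command,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Start a call in the background by adding _async, then poll job/status
# with the jobid returned in the first response.
start = build_rc_request("core/gc", {"_async": True})
poll = build_rc_request("job/status", {"jobid": 1})
# To execute: response = json.load(request.urlopen(start))
```

The same helper works for any command in the list below; only the command name and parameter dictionary change.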
+ +If _async has a true value when supplied to an rc call then it will return immediately with a job id and the task will be run in the background. The job/status call can be used to get information of the @@ -6105,10 +6584,27 @@ job/list can be used to show the running or recently completed jobs ] } +Assigning operations to groups with _group = + +Each rc call has its own stats group for tracking its metrics. By +default grouping is done by the composite group name from the prefix job/ +and the id of the job, like so: job/1. + +If _group has a value then stats for that request will be grouped under +that value. This allows the caller to group stats under their own name. + +Stats for a specific group can be accessed by passing group to core/stats: + + $ rclone rc --json '{ "group": "job/1" }' core/stats + { + "speed": 12345 + ... + } + Supported commands -cache/expire: Purge a remote from cache +cache/expire: Purge a remote from cache {#cache/expire} Purge a remote from the cache backend. Supports either a directory or a file. Params: - remote = path to remote (required) - withData = @@ -6119,7 +6615,7 @@ Eg rclone rc cache/expire remote=path/to/sub/folder/ rclone rc cache/expire remote=/ withData=true -cache/fetch: Fetch file chunks +cache/fetch: Fetch file chunks {#cache/fetch} Ensure the specified file chunks are cached on disk. @@ -6145,11 +6641,11 @@ files to fetch, eg File names will automatically be encrypted when a crypt remote is used on top of the cache. -cache/stats: Get cache stats +cache/stats: Get cache stats {#cache/stats} Show statistics for the cache remote. -config/create: create the config for a remote. +config/create: create the config for a remote. {#config/create} This takes the following parameters @@ -6160,7 +6656,7 @@ See the config create command for more information on the above. Authentication is required for this call. -config/delete: Delete a remote in the config file. +config/delete: Delete a remote in the config file.
{#config/delete} Parameters: - name - name of remote to delete @@ -6168,7 +6664,7 @@ See the config delete command for more information on the above. Authentication is required for this call. -config/dump: Dumps the config file. +config/dump: Dumps the config file. {#config/dump} Returns a JSON object: - key: value @@ -6178,7 +6674,7 @@ See the config dump command for more information on the above. Authentication is required for this call. -config/get: Get a remote in the config file. +config/get: Get a remote in the config file. {#config/get} Parameters: - name - name of remote to get @@ -6186,7 +6682,7 @@ See the config dump command for more information on the above. Authentication is required for this call. -config/listremotes: Lists the remotes in the config file. +config/listremotes: Lists the remotes in the config file. {#config/listremotes} Returns - remotes - array of remote names @@ -6194,7 +6690,7 @@ See the listremotes command for more information on the above. Authentication is required for this call. -config/password: password the config for a remote. +config/password: password the config for a remote. {#config/password} This takes the following parameters @@ -6205,7 +6701,7 @@ above. Authentication is required for this call. -config/providers: Shows how providers are configured in the config file. +config/providers: Shows how providers are configured in the config file. {#config/providers} Returns a JSON object: - providers - array of objects @@ -6214,7 +6710,7 @@ above. Authentication is required for this call. -config/update: update the config for a remote. +config/update: update the config for a remote. {#config/update} This takes the following parameters @@ -6224,89 +6720,147 @@ See the config update command for more information on the above. Authentication is required for this call. -core/bwlimit: Set the bandwidth limit.
{#core/bwlimit} This sets the bandwidth limit to that passed in. Eg - rclone rc core/bwlimit rate=1M rclone rc core/bwlimit rate=off + { + "bytesPerSecond": -1, + "rate": "off" + } + rclone rc core/bwlimit rate=1M + { + "bytesPerSecond": 1048576, + "rate": "1M" + } + +If the rate parameter is not supplied then the bandwidth is queried + + rclone rc core/bwlimit + { + "bytesPerSecond": 1048576, + "rate": "1M" + } The format of the parameter is exactly the same as passed to –bwlimit except only one bandwidth may be specified. -core/gc: Runs a garbage collection. +In either case “rate” is returned as a human readable string, and +“bytesPerSecond” is returned as a number. + +core/gc: Runs a garbage collection. {#core/gc} This tells the go runtime to do a garbage collection run. It isn’t necessary to call this normally, but it can be useful for debugging memory problems. -core/memstats: Returns the memory statistics +core/group-list: Returns list of stats. {#core/group-list} -This returns the memory statistics of the running program. What the -values mean are explained in the go docs: -https://golang.org/pkg/runtime/#MemStats - -The most interesting values for most people are: - -- HeapAlloc: This is the amount of memory rclone is actually using -- HeapSys: This is the amount of memory rclone has obtained from the - OS -- Sys: this is the total amount of memory requested from the OS - - It is virtual memory so may include unused memory - -core/obscure: Obscures a string passed in. - -Pass a clear string and rclone will obscure it for the config file: - -clear - string - -Returns - obscured - string - -core/pid: Return PID of current process - -This returns PID of current process. Useful for stopping rclone process. - -core/stats: Returns stats about current transfers. - -This returns all available stats - - rclone rc core/stats +This returns a list of stats groups currently in memory.
Returns the following values: { - "speed": average speed in bytes/sec since start of the process, - "bytes": total transferred bytes since the start of the process, - "errors": number of errors, - "fatalError": whether there has been at least one FatalError, - "retryError": whether there has been at least one non-NoRetryError, - "checks": number of checked files, - "transfers": number of transferred files, - "deletes" : number of deleted files, - "elapsedTime": time in seconds since the start of the process, - "lastError": last occurred error, - "transferring": an array of currently active file transfers: + "groups": an array of group names: [ - { - "bytes": total transferred bytes for this file, - "eta": estimated time in seconds until file transfer completion - "name": name of the file, - "percentage": progress of the file transfer in percent, - "speed": speed in bytes/sec, - "speedAvg": speed in bytes/sec as an exponentially weighted moving average, - "size": size of the file in bytes - } - ], - "checking": an array of names of currently active file checks - [] + "group1", + "group2", + ... + ] } -Values for “transferring”, “checking” and “lastError” are only assigned -if data is available. The value for “eta” is null if an eta cannot be -determined. + ### core/memstats: Returns the memory statistics {#core/memstats} -core/version: Shows the current version of rclone and the go runtime. + This returns the memory statistics of the running program. What the values mean + are explained in the go docs: https://golang.org/pkg/runtime/#MemStats + + The most interesting values for most people are: + + * HeapAlloc: This is the amount of memory rclone is actually using + * HeapSys: This is the amount of memory rclone has obtained from the OS + * Sys: this is the total amount of memory requested from the OS + * It is virtual memory so may include unused memory + + ### core/obscure: Obscures a string passed in. 
{#core/obscure} + + Pass a clear string and rclone will obscure it for the config file: + - clear - string + + Returns + - obscured - string + + ### core/pid: Return PID of current process {#core/pid} + + This returns PID of current process. + Useful for stopping rclone process. + + ### core/stats: Returns stats about current transfers. {#core/stats} + + This returns all available stats: + + rclone rc core/stats + + If group is not provided then summed up stats for all groups will be + returned. + + Parameters + - group - name of the stats group (string) + + Returns the following values: +
+    {
+        "speed": average speed in bytes/sec since start of the process,
+        "bytes": total transferred bytes since the start of the process,
+        "errors": number of errors,
+        "fatalError": whether there has been at least one FatalError,
+        "retryError": whether there has been at least one non-NoRetryError,
+        "checks": number of checked files,
+        "transfers": number of transferred files,
+        "deletes": number of deleted files,
+        "elapsedTime": time in seconds since the start of the process,
+        "lastError": last occurred error,
+        "transferring": an array of currently active file transfers:
+            [
+                {
+                    "bytes": total transferred bytes for this file,
+                    "eta": estimated time in seconds until file transfer completion,
+                    "name": name of the file,
+                    "percentage": progress of the file transfer in percent,
+                    "speed": speed in bytes/sec,
+                    "speedAvg": speed in bytes/sec as an exponentially weighted moving average,
+                    "size": size of the file in bytes
+                }
+            ],
+        "checking": an array of names of currently active file checks
+            []
+    }
+ + Values for "transferring", "checking" and "lastError" are only assigned if data is available. + The value for "eta" is null if an eta cannot be determined. + + ### core/stats-reset: Reset stats. {#core/stats-reset} + + This clears counters and errors for all stats or specific stats group if group + is provided.
+ + Parameters + - group - name of the stats group (string) + + ### core/transferred: Returns stats about completed transfers. {#core/transferred} + + This returns stats about completed transfers: + + rclone rc core/transferred + + If group is not provided then completed transfers for all groups will be + returned. + + Parameters + - group - name of the stats group (string) + + Returns the following values: +
+    {
+        "transferred": an array of completed transfers (including failed ones):
+            [
+                {
+                    "name": name of the file,
+                    "size": size of the file in bytes,
+                    "bytes": total transferred bytes for this file,
+                    "checked": if the transfer is only checked (skipped, deleted),
+                    "timestamp": integer representing millisecond unix epoch,
+                    "error": string description of the error (empty if successful),
+                    "jobid": id of the job that this transfer belongs to
+                }
+            ]
+    }
+ +core/version: Shows the current version of rclone and the go runtime. {#core/version} This shows the current version of go and the go runtime - version - rclone version, eg “v1.44” - decomposed - version number as [major, @@ -6316,13 +6870,13 @@ git version - os - OS in use as according to Go - arch - cpu architecture in use according to Go - goVersion - version of Go runtime in use -job/list: Lists the IDs of the running jobs +job/list: Lists the IDs of the running jobs {#job/list} Parameters - None Results - jobids - array of integer job ids -job/status: Reads the status of the job ID +job/status: Reads the status of the job ID {#job/status} Parameters - jobid - id of the job (integer) @@ -6333,9 +6887,14 @@ empty string for no error - finished - boolean whether the job has finished or not - id - as passed in above - startTime - time the job started (eg “2018-10-26T18:50:20.528336039+01:00”) - success - boolean - true for success false otherwise - output - output of the job as would -have been returned if called synchronously +have been returned if called synchronously - progress - output of the +progress related to the
underlying job -operations/about: Return the space used on the remote +job/stop: Stop the running job {#job/stop} + +Parameters - jobid - id of the job (integer) + +operations/about: Return the space used on the remote {#operations/about} This takes the following parameters @@ -6347,7 +6906,7 @@ See the about command for more information on the above. Authentication is required for this call. -operations/cleanup: Remove trashed files in the remote or path +operations/cleanup: Remove trashed files in the remote or path {#operations/cleanup} This takes the following parameters @@ -6357,7 +6916,7 @@ See the cleanup command for more information on the above. Authentication is required for this call. -operations/copyfile: Copy a file from source remote to destination remote +operations/copyfile: Copy a file from source remote to destination remote {#operations/copyfile} This takes the following parameters @@ -6369,7 +6928,7 @@ This takes the following parameters Authentication is required for this call. -operations/copyurl: Copy the URL to the object +operations/copyurl: Copy the URL to the object {#operations/copyurl} This takes the following parameters @@ -6381,7 +6940,7 @@ See the copyurl command for more information on the above. Authentication is required for this call. -operations/delete: Remove files in the path +operations/delete: Remove files in the path {#operations/delete} This takes the following parameters @@ -6391,7 +6950,7 @@ See the delete command for more information on the above. Authentication is required for this call. -operations/deletefile: Remove the single file pointed to +operations/deletefile: Remove the single file pointed to {#operations/deletefile} This takes the following parameters @@ -6402,7 +6961,7 @@ See the deletefile command for more information on the above. Authentication is required for this call.
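The job/status fields above (finished, success, error, output) are all a caller needs to drive an async job to completion. Below is a minimal Python sketch of interpreting one such response; `job_result` is a hypothetical helper name, not part of rclone, and the sample response only uses the fields documented here.

```python
import json

def job_result(status):
    # Interpret a job/status response using only the documented fields:
    # finished, success, error, output.  Hypothetical helper, not rclone API.
    if not status.get("finished"):
        return False, None
    if not status.get("success"):
        raise RuntimeError(status.get("error") or "job failed")
    return True, status.get("output")

# A response shaped like the documented job/status output:
resp = json.loads("""
{
  "finished": true,
  "success": true,
  "error": "",
  "id": 1,
  "startTime": "2018-10-26T18:50:20.528336039+01:00",
  "output": {"bytes": 0}
}
""")
done, out = job_result(resp)
```

In a real polling loop you would call job/status with the jobid every few seconds until `job_result` reports the job as finished.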
-operations/fsinfo: Return information about the remote +operations/fsinfo: Return information about the remote {#operations/fsinfo} This takes the following parameters @@ -6458,7 +7017,7 @@ instead: rclone rc --loopback operations/fsinfo fs=remote: -operations/list: List the given remote and path in JSON format +operations/list: List the given remote and path in JSON format {#operations/list} This takes the following parameters @@ -6480,7 +7039,7 @@ See the lsjson command for more information on the above and examples. Authentication is required for this call. -operations/mkdir: Make a destination directory or container +operations/mkdir: Make a destination directory or container {#operations/mkdir} This takes the following parameters @@ -6491,7 +7050,7 @@ See the mkdir command for more information on the above. Authentication is required for this call. -operations/movefile: Move a file from source remote to destination remote +operations/movefile: Move a file from source remote to destination remote {#operations/movefile} This takes the following parameters @@ -6503,7 +7062,7 @@ This takes the following parameters Authentication is required for this call. -operations/publiclink: Create or retrieve a public link to the given file or folder. +operations/publiclink: Create or retrieve a public link to the given file or folder. {#operations/publiclink} This takes the following parameters @@ -6518,7 +7077,7 @@ See the link command for more information on the above. Authentication is required for this call. -operations/purge: Remove a directory or container and all of its contents +operations/purge: Remove a directory or container and all of its contents {#operations/purge} This takes the following parameters @@ -6529,7 +7088,7 @@ See the purge command for more information on the above. Authentication is required for this call.
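Each of the operations/* calls above is an HTTP POST of a JSON object of parameters to the rc server. A minimal Python sketch of composing such a request follows; the default address comes from --rc-addr, the `operations/fsinfo fs=remote:` invocation mirrors the example in the text, and the `rc_request` helper name is ours, for illustration only.

```python
import json

RC_URL = "http://localhost:5572"  # default --rc-addr; adjust for your server

def rc_request(command, **params):
    # Build the endpoint URL and JSON body for an rc call.  Sending the
    # POST (e.g. with urllib.request) is left out so the sketch stays
    # self-contained and does not need a running rclone.
    return "%s/%s" % (RC_URL, command), json.dumps(params)

# Mirrors the documented example: rclone rc operations/fsinfo fs=remote:
url, body = rc_request("operations/fsinfo", fs="remote:")
```

Authenticated calls additionally need the --rc-user/--rc-pass credentials supplied as HTTP basic auth.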
-operations/rmdir: Remove an empty directory or container +operations/rmdir: Remove an empty directory or container {#operations/rmdir} This takes the following parameters @@ -6540,7 +7099,7 @@ See the rmdir command for more information on the above. Authentication is required for this call. -operations/rmdirs: Remove all the empty directories in the path +operations/rmdirs: Remove all the empty directories in the path {#operations/rmdirs} This takes the following parameters @@ -6552,7 +7111,7 @@ See the rmdirs command for more information on the above. Authentication is required for this call. -operations/size: Count the number of bytes and files in remote +operations/size: Count the number of bytes and files in remote {#operations/size} This takes the following parameters @@ -6567,11 +7126,11 @@ See the size command for more information on the above. Authentication is required for this call. -options/blocks: List all the option blocks +options/blocks: List all the option blocks {#options/blocks} Returns - options - a list of the options block names -options/get: Get all the options +options/get: Get all the options {#options/get} Returns an object where keys are option block names and values are an object with the current option values in. @@ -6579,7 +7138,7 @@ object with the current option values in. This shows the internal names of the option within rclone which should map to the external options very easily with a few exceptions. -options/set: Set an option +options/set: Set an option {#options/set} Parameters @@ -6606,23 +7165,23 @@ And this sets NOTICE level logs (normal without -v) rclone rc options/set --json '{"main": {"LogLevel": 6}}' -rc/error: This returns an error +rc/error: This returns an error {#rc/error} This returns an error with the input as part of its error string. Useful for testing error handling.
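The options/set argument is a two-level JSON object: the top-level key names an option block and its value maps option names to new values. A small Python sketch of building that document, assuming the block/value layout shown in the options/set example above (`options_set_body` is a hypothetical helper name):

```python
import json

def options_set_body(block, **values):
    # Build the JSON document passed to options/set: the top-level key is
    # the option block name, its value maps option names to new values.
    return json.dumps({block: values})

# Equivalent of: rclone rc options/set --json '{"main": {"LogLevel": 6}}'
body = options_set_body("main", LogLevel=6)
```

Several options in several blocks can be set in one call by merging such objects before serialising.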
-rc/list: List all the registered remote control commands +rc/list: List all the registered remote control commands {#rc/list} This lists all the registered remote control commands as a JSON map in the commands response. -rc/noop: Echo the input to the output parameters +rc/noop: Echo the input to the output parameters {#rc/noop} This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly. -rc/noopauth: Echo the input to the output parameters requiring auth +rc/noopauth: Echo the input to the output parameters requiring auth {#rc/noopauth} This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to @@ -6630,7 +7189,7 @@ check that parameter passing is working properly. Authentication is required for this call. -sync/copy: copy a directory from source remote to destination remote +sync/copy: copy a directory from source remote to destination remote {#sync/copy} This takes the following parameters @@ -6641,7 +7200,7 @@ See the copy command command for more information on the above. Authentication is required for this call. -sync/move: move a directory from source remote to destination remote +sync/move: move a directory from source remote to destination remote {#sync/move} This takes the following parameters @@ -6653,7 +7212,7 @@ See the move command command for more information on the above. Authentication is required for this call. -sync/sync: sync a directory from source remote to destination remote +sync/sync: sync a directory from source remote to destination remote {#sync/sync} This takes the following parameters @@ -6664,7 +7223,7 @@ See the sync command command for more information on the above. Authentication is required for this call. -vfs/forget: Forget files or directories in the directory cache. 
+vfs/forget: Forget files or directories in the directory cache. {#vfs/forget} This forgets the paths in the directory cache causing them to be re-read from the remote when needed. @@ -6680,7 +7239,7 @@ will forget that dir, eg rclone rc vfs/forget file=hello file2=goodbye dir=home/junk -vfs/poll-interval: Get the status or update the value of the poll-interval option. +vfs/poll-interval: Get the status or update the value of the poll-interval option. {#vfs/poll-interval} Without any parameter given this returns the current status of the poll-interval setting. @@ -6701,7 +7260,7 @@ reached. If poll-interval is updated or disabled temporarily, some changes might not get picked up by the polling function, depending on the used remote. -vfs/refresh: Refresh the directory cache. +vfs/refresh: Refresh the directory cache. {#vfs/refresh} This reads the directories for the specified paths and freshens the directory cache. @@ -6921,6 +7480,7 @@ Here is an overview of the major features of each cloud storage system. Name Hash ModTime Case Insensitive Duplicate Files MIME Type ------------------------------ -------------- --------- ------------------ ----------------- ----------- + 1Fichier Whirlpool No No Yes R Amazon Drive MD5 No Yes No R Amazon S3 MD5 Yes No No R/W Backblaze B2 SHA1 Yes No No R/W @@ -6929,6 +7489,7 @@ Here is an overview of the major features of each cloud storage system. FTP - No No No - Google Cloud Storage MD5 Yes No No R/W Google Drive MD5 Yes No Yes R/W + Google Photos - No No Yes R HTTP - No No No R Hubic MD5 Yes No No R/W Jottacloud MD5 Yes Yes No R/W @@ -6939,6 +7500,8 @@ Here is an overview of the major features of each cloud storage system. 
OpenDrive MD5 Yes Yes No - Openstack Swift MD5 Yes No No R/W pCloud MD5, SHA1 Yes No No W + premiumize.me - No Yes No R + put.io CRC-32 Yes No Yes R QingStor MD5 No No No R/W SFTP MD5, SHA1 ‡ Yes Depends No - WebDAV MD5, SHA1 †† Yes ††† Depends No - @@ -7031,30 +7594,34 @@ All the remotes support a basic set of features, but there are some optional features supported by some remotes used to make some operations more efficient. - Name Purge Copy Move DirMove CleanUp ListR StreamUpload LinkSharing About - ------------------------------ ------- ------ ------ --------- --------- ------- -------------- ------------- ------- - Amazon Drive Yes No Yes Yes No #575 No No No #2178 No - Amazon S3 No Yes No No No Yes Yes No #2178 No - Backblaze B2 No Yes No No Yes Yes Yes No #2178 No - Box Yes Yes Yes Yes No #575 No Yes Yes No - Dropbox Yes Yes Yes Yes No #575 No Yes Yes Yes - FTP No No Yes Yes No No Yes No #2178 No - Google Cloud Storage Yes Yes No No No Yes Yes No #2178 No - Google Drive Yes Yes Yes Yes Yes Yes Yes Yes Yes - HTTP No No No No No No No No #2178 No - Hubic Yes † Yes No No No Yes Yes No #2178 Yes - Jottacloud Yes Yes Yes Yes No Yes No Yes Yes - Mega Yes No Yes Yes Yes No No No #2178 Yes - Microsoft Azure Blob Storage Yes Yes No No No Yes No No #2178 No - Microsoft OneDrive Yes Yes Yes Yes No #575 No No Yes Yes - OpenDrive Yes Yes Yes Yes No No No No No - Openstack Swift Yes † Yes No No No Yes Yes No #2178 Yes - pCloud Yes Yes Yes Yes Yes No No No #2178 Yes - QingStor No Yes No No No Yes No No #2178 No - SFTP No No Yes Yes No No Yes No #2178 Yes - WebDAV Yes Yes Yes Yes No No Yes ‡ No #2178 Yes - Yandex Disk Yes Yes Yes Yes Yes No Yes Yes Yes - The local filesystem Yes No Yes Yes No No Yes No Yes + Name Purge Copy Move DirMove CleanUp ListR StreamUpload LinkSharing About EmptyDir + ------------------------------ ------- ------ ------ --------- --------- ------- -------------- ------------- ------- ---------- + 1Fichier No No No No No No No No No Yes + Amazon 
Drive Yes No Yes Yes No #575 No No No #2178 No Yes + Amazon S3 No Yes No No No Yes Yes No #2178 No No + Backblaze B2 No Yes No No Yes Yes Yes Yes No No + Box Yes Yes Yes Yes No #575 No Yes Yes No Yes + Dropbox Yes Yes Yes Yes No #575 No Yes Yes Yes Yes + FTP No No Yes Yes No No Yes No #2178 No Yes + Google Cloud Storage Yes Yes No No No Yes Yes No #2178 No No + Google Drive Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes + Google Photos No No No No No No No No No No + HTTP No No No No No No No No #2178 No Yes + Hubic Yes † Yes No No No Yes Yes No #2178 Yes No + Jottacloud Yes Yes Yes Yes No Yes No Yes Yes Yes + Mega Yes No Yes Yes Yes No No No #2178 Yes Yes + Microsoft Azure Blob Storage Yes Yes No No No Yes No No #2178 No No + Microsoft OneDrive Yes Yes Yes Yes No #575 No No Yes Yes Yes + OpenDrive Yes Yes Yes Yes No No No No No Yes + Openstack Swift Yes † Yes No No No Yes Yes No #2178 Yes No + pCloud Yes Yes Yes Yes Yes No No No #2178 Yes Yes + premiumize.me Yes No Yes Yes No No No Yes Yes Yes + put.io Yes No Yes Yes Yes No Yes No #2178 Yes Yes + QingStor No Yes No No No Yes No No #2178 No No + SFTP No No Yes Yes No No Yes No #2178 Yes Yes + WebDAV Yes Yes Yes Yes No No Yes ‡ No #2178 Yes Yes + Yandex Disk Yes Yes Yes Yes Yes No Yes Yes Yes Yes + The local filesystem Yes No Yes Yes No No Yes No Yes Yes Purge @@ -7126,6 +7693,486 @@ This is also used to return the space used, available for rclone mount. If the server can’t do About then rclone about will return an error. +EmptyDir + +The remote supports empty directories. See Limitations for details. Most +Object/Bucket based remotes do not support this. + + + +GLOBAL FLAGS + + +This describes the global flags available to every rclone command split +into two groups, non backend and backend flags. + + +Non Backend Flags + +These flags are available for every command. + + --ask-password Allow prompt for password for encrypted configuration. (default true) + --auto-confirm If enabled, do not request console confirmation. 
+      --backup-dir string                    Make backups into hierarchy based in DIR.
+      --bind string                          Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --buffer-size SizeSuffix               In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable                  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --ca-cert string                       CA certificate used to verify servers
+      --cache-dir string                     Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+      --checkers int                         Number of checkers to run in parallel. (default 8)
+  -c, --checksum                             Skip based on checksum (if available) & size, not mod-time & size
+      --client-cert string                   Client SSL certificate (PEM) for mutual TLS auth
+      --client-key string                    Client SSL private key (PEM) for mutual TLS auth
+      --compare-dest string                  Use DIR to server side copy files from.
+      --config string                        Config file. (default "$HOME/.config/rclone/rclone.conf")
+      --contimeout duration                  Connect timeout (default 1m0s)
+      --copy-dest string                     Compare dest to DIR also.
+      --cpuprofile string                    Write cpu profile to file
+      --delete-after                         When synchronizing, delete files on destination after transferring (default)
+      --delete-before                        When synchronizing, delete files on destination before transferring
+      --delete-during                        When synchronizing, delete files during transfer
+      --delete-excluded                      Delete files on dest excluded from sync
+      --disable string                       Disable a comma separated list of features. Use help to see a list.
+ -n, --dry-run Do a trial run with no permanent changes + --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles + --dump-bodies Dump HTTP headers and bodies - may contain sensitive info + --dump-headers Dump HTTP headers - may contain sensitive info + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read exclude patterns from file + --exclude-if-present string Exclude directories if filename is present + --fast-list Use recursive list if available. Uses more memory but fewer transactions. + --files-from stringArray Read list of source-file names from file + -f, --filter stringArray Add a file-filtering rule + --filter-from stringArray Read filtering patterns from a file + --ignore-case Ignore case in filters (case insensitive) + --ignore-case-sync Ignore case when synchronizing + --ignore-checksum Skip post copy check of checksums. + --ignore-errors delete even if there are I/O errors + --ignore-existing Skip all files that exist on destination + --ignore-size Ignore size when skipping use mod-time or checksum. + -I, --ignore-times Don't skip files that match size and time - transfer all files + --immutable Do not modify files. Fail if existing files have been modified. + --include stringArray Include files matching pattern + --include-from stringArray Read include patterns from file + --log-file string Log everything to this file + --log-format string Comma separated list of log format options (default "date,time") + --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") + --low-level-retries int Number of low level retries to do. (default 10) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-backlog int Maximum number of objects in sync or check backlog. 
(default 10000) + --max-delete int When synchronizing, limit the number of deletes (default -1) + --max-depth int If set limits the recursion depth to this. (default -1) + --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-stats-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000) + --max-transfer SizeSuffix Maximum size of data to transfer. (default off) + --memprofile string Write memory profile to file + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off) + --modify-window duration Max time diff to be considered the same (default 1ns) + --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250M) + --multi-thread-streams int Max number of streams to use for multi-thread downloads. (default 4) + --no-check-certificate Do not verify the server SSL certificate. Insecure. + --no-gzip-encoding Don't set Accept-Encoding: gzip. + --no-traverse Don't traverse destination file system on copy. + --no-update-modtime Don't update destination mod-time if files identical. + -P, --progress Show progress during transfer. + -q, --quiet Print as little stuff as possible + --rc Enable the remote control server. + --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-allow-origin string Set the allowed origin for CORS. + --rc-baseurl string Prefix for URLs - leave blank for root. + --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) + --rc-client-ca string Client certificate authority to verify clients with + --rc-files string Path to local files to serve on the HTTP server. 
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done + --rc-job-expire-duration duration expire finished async jobs older than this value (default 1m0s) + --rc-job-expire-interval duration interval to check for expired async jobs (default 10s) + --rc-key string SSL PEM Private key + --rc-max-header-bytes int Maximum size of request header (default 4096) + --rc-no-auth Don't require auth for certain methods. + --rc-pass string Password for authentication. + --rc-realm string realm for authentication (default "rclone") + --rc-serve Enable the serving of remote objects. + --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) + --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) + --rc-user string User name for authentication. + --rc-web-fetch-url string URL to fetch the releases for webgui. (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest") + --rc-web-gui Launch WebGUI on localhost + --rc-web-gui-update Update / Force update to latest version of web gui + --retries int Retry operations this many times if they fail (default 3) + --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) + --size-only Skip based on size only, not mod-time or checksum + --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) + --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45) + --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") + --stats-one-line Make the stats fit on one line. + --stats-one-line-date Enables --stats-one-line and add current date/time prefix. + --stats-one-line-date-format string Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). 
See https://golang.org/pkg/time/#Time.Format + --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") + --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) + --suffix string Suffix to add to changed files. + --suffix-keep-extension Preserve the extension when using --suffix. + --syslog Use Syslog for logging + --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --timeout duration IO idle timeout (default 5m0s) + --tpslimit float Limit HTTP transactions per second to this. + --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) + --track-renames When synchronizing, track file renames and do a server side move if possible + --transfers int Number of file transfers to run in parallel. (default 4) + -u, --update Skip files that are newer on the destination. + --use-cookies Enable session cookiejar. + --use-json-log Use json log format. + --use-mmap Use mmap allocator (see docs). + --use-server-modtime Use server modified time instead of object metadata + --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.0") + -v, --verbose count Print lots more stuff (repeat for more) + + +Backend Flags + +These flags are available for every command. They control the backends +and may be set in the config file. + + --acd-auth-url string Auth server URL. + --acd-client-id string Amazon Application Client ID. + --acd-client-secret string Amazon Application Client Secret. + --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) + --acd-token-url string Token server url. + --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) + --alias-remote string Remote or path to alias. 
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive. + --azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator) + --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) + --azureblob-endpoint string Endpoint for the service + --azureblob-key string Storage Account Key (leave blank to use SAS URL or Emulator) + --azureblob-list-chunk int Size of blob list. (default 5000) + --azureblob-sas-url string SAS URL for container level access only + --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint) + --b2-account string Account ID or Application Key ID + --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) + --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w) + --b2-download-url string Custom endpoint for downloads. + --b2-endpoint string Endpoint for the service. + --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. + --b2-key string Application Key + --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. + --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) + --b2-versions Include old versions in directory listings. + --box-client-id string Box App Client Id. + --box-client-secret string Box App Client Secret + --box-commit-retries int Max number of times to try committing a multipart file. (default 100) + --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) + --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. 
(default 1m0s) + --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. + --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") + --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) + --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) + --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") + --cache-db-purge Clear all the cached data for this remote on start. + --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) + --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server + --cache-plex-password string The password of the Plex user + --cache-plex-url string The URL of the Plex server + --cache-plex-username string The username of the Plex user + --cache-read-retries int How many times to retry a read from a cache storage. (default 10) + --cache-remote string Remote to cache. + --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) + --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. + --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) + --cache-workers int How many workers should run in parallel to download chunks. (default 4) + --cache-writes Cache file data on writes through the FS + -L, --copy-links Follow symlinks and copy the pointed to item. + --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) + --crypt-filename-encryption string How to encrypt the filenames. 
(default "standard")
+      --crypt-password string                      Password or pass phrase for encryption.
+      --crypt-password2 string                     Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string                        Remote to encrypt/decrypt.
+      --crypt-show-mapping                         For all files listed show how the names encrypt.
+      --drive-acknowledge-abuse                    Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-allow-import-name-change             Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+      --drive-alternate-export                     Use alternate export URLs for google documents export.
+      --drive-auth-owner-only                      Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+      --drive-client-id string                     Google Application Client Id
+      --drive-client-secret string                 Google Application Client Secret
+      --drive-export-formats string                Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-formats string                       Deprecated: see export_formats
+      --drive-impersonate string                   Impersonate this user when using a service account.
+      --drive-import-formats string                Comma separated list of preferred formats for uploading Google docs.
+      --drive-keep-revision-forever                Keep new head revision of each file forever.
+      --drive-list-chunk int                       Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-pacer-burst int                      Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration             Minimum time to sleep between API calls. (default 100ms)
+      --drive-root-folder-id string                ID of the root folder
+      --drive-scope string                         Scope that rclone should use when requesting access from drive.
+      --drive-server-side-across-configs           Allow server side operations (eg copy) to work across different drive configs.
+ --drive-service-account-credentials string Service Account Credentials JSON blob + --drive-service-account-file string Service Account Credentials JSON file path + --drive-shared-with-me Only show files that are shared with me. + --drive-size-as-quota Show storage quota usage for file size. + --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only. + --drive-skip-gdocs Skip google documents in all listings. + --drive-team-drive string ID of the Team Drive + --drive-trashed-only Only show files that are in the trash. + --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) + --drive-use-created-date Use file created date instead of modified date., + --drive-use-trash Send files to the trash instead of deleting permanently. (default true) + --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) + --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) + --dropbox-client-id string Dropbox App Client Id + --dropbox-client-secret string Dropbox App Client Secret + --dropbox-impersonate string Impersonate this user when using a business account. + --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl + --fichier-shared-folder string If you want to download a shared folder, add this parameter + --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited + --ftp-host string FTP host to connect to + --ftp-no-check-certificate Do not verify the TLS certificate of the server + --ftp-pass string FTP password + --ftp-port string FTP port, leave blank to use default (21) + --ftp-tls Use FTP over TLS (Implicit) + --ftp-user string FTP username, leave blank for current username, $USER + --gcs-bucket-acl string Access Control List for new buckets. + --gcs-bucket-policy-only Access checks should use bucket-level IAM policies. 
+ --gcs-client-id string Google Application Client Id + --gcs-client-secret string Google Application Client Secret + --gcs-location string Location for the newly created buckets. + --gcs-object-acl string Access Control List for new objects. + --gcs-project-number string Project number. + --gcs-service-account-file string Service Account Credentials JSON file path + --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. + --gphotos-client-id string Google Application Client Id + --gphotos-client-secret string Google Application Client Secret + --gphotos-read-only Set to make the Google Photos backend read only. + --gphotos-read-size Set to read the size of media items. + --http-headers CommaSepList Set HTTP headers for all transactions + --http-no-slash Set this if the site doesn't end directories with / + --http-url string URL of http host to connect to + --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --hubic-client-id string Hubic Client Id + --hubic-client-secret string Hubic Client Secret + --hubic-no-chunk Don't chunk files during streaming upload. + --jottacloud-hard-delete Delete files permanently rather than putting them into the trash. + --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) + --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. + --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M) + --koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net") + --koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used. 
+ --koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) + --koofr-setmtime Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. (default true) + --koofr-user string Your Koofr user name + -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension + --local-case-insensitive Force the filesystem to report itself as case insensitive + --local-case-sensitive Force the filesystem to report itself as case sensitive. + --local-no-check-updated Don't check to see if the files change during upload + --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) + --local-nounc string Disable UNC (long path names) conversion on Windows + --mega-debug Output more debug from Mega. + --mega-hard-delete Delete files permanently rather than putting them into the trash. + --mega-pass string Password. + --mega-user string User name + -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). + --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) + --onedrive-client-id string Microsoft App Client Id + --onedrive-client-secret string Microsoft App Client Secret + --onedrive-drive-id string The ID of the drive to use + --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) + --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. + --opendrive-password string Password. + --opendrive-username string Username + --pcloud-client-id string Pcloud App Client Id + --pcloud-client-secret string Pcloud App Client Secret + --qingstor-access-key-id string QingStor Access Key ID + --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M) + --qingstor-connection-retries int Number of connection retries. 
(default 3) + --qingstor-endpoint string Enter a endpoint URL to connection QingStor API. + --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. + --qingstor-secret-access-key string QingStor Secret Access Key (password) + --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1) + --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --qingstor-zone string Zone to connect to. + --s3-access-key-id string AWS Access Key ID. + --s3-acl string Canned ACL used when creating buckets and storing or copying objects. + --s3-bucket-acl string Canned ACL used when creating buckets. + --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) + --s3-disable-checksum Don't store MD5 checksum with object metadata + --s3-endpoint string Endpoint for S3 API. + --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + --s3-force-path-style If true use path style access if false use virtual hosted style. (default true) + --s3-location-constraint string Location constraint - must be set to match the Region. + --s3-provider string Choose your S3 provider. + --s3-region string Region to connect to. + --s3-secret-access-key string AWS Secret Access Key (password) + --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. + --s3-session-token string An AWS session token + --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. + --s3-storage-class string The storage class to use when storing new objects in S3. + --s3-upload-concurrency int Concurrency for multipart uploads. (default 4) + --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M) + --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint. + --s3-v2-auth If true use v2 authentication. 
+ --sftp-ask-password Allow asking for SFTP password when needed. + --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. + --sftp-host string SSH host to connect to + --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. + --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. + --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-md5sum-command string The command used to read md5 hashes. Leave blank for autodetect. + --sftp-pass string SSH password, leave blank to use ssh-agent. + --sftp-path-override string Override path used by SSH connection. + --sftp-port string SSH port, leave blank to use default (22) + --sftp-set-modtime Set the modified time on the remote if set. (default true) + --sftp-sha1sum-command string The command used to read sha1 hashes. Leave blank for autodetect. + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. Those algorithms are insecure and may allow plaintext data to be recovered by an attacker. + --sftp-user string SSH username, leave blank for current username, ncw + --skip-links Don't warn about skipped symlinks. + --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) + --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) + --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) + --swift-auth string Authentication URL for server (OS_AUTH_URL). 
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) + --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") + --swift-env-auth Get swift credentials from environment variables in standard OpenStack form. + --swift-key string API key or password (OS_PASSWORD). + --swift-no-chunk Don't chunk files during streaming upload. + --swift-region string Region name - optional (OS_REGION_NAME) + --swift-storage-policy string The storage policy to use when creating a new container + --swift-storage-url string Storage URL - optional (OS_STORAGE_URL) + --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + --swift-user string User name to log in (OS_USERNAME). + --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). + --union-remotes string List of space separated remotes. + --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-bearer-token-command string Command to run to get a bearer token + --webdav-pass string Password. 
+ --webdav-url string URL of http host to connect to + --webdav-user string User name + --webdav-vendor string Name of the Webdav site/service/software you are using + --yandex-client-id string Yandex Client Id + --yandex-client-secret string Yandex Client Secret + --yandex-unlink Remove existing public link to file/folder with link command rather than creating. + + +1Fichier + +This is a backend for the 1Fichier cloud storage service. Note that a +Premium subscription is required to use the API. + +Paths are specified as remote:path + +Paths may be as deep as required, eg remote:directory/subdirectory. + +The initial setup for 1Fichier involves getting the API key from the +website which you need to do in your browser. + +Here is an example of how to make a remote called remote. First run: + + rclone config + +This will guide you through an interactive setup process: + + No remotes found - make a new one + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> remote + Type of storage to configure. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + [snip] + XX / 1Fichier + \ "fichier" + [snip] + Storage> fichier + ** See help for fichier backend at: https://rclone.org/fichier/ ** + + Your API Key, get it from https://1fichier.com/console/params.pl + Enter a string value. Press Enter for the default (""). + api_key> example_key + + Edit advanced config?
(y/n) + y) Yes + n) No + y/n> + Remote config + -------------------- + [remote] + type = fichier + api_key = example_key + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y + +Once configured you can then use rclone like this, + +List directories in top level of your 1Fichier account + + rclone lsd remote: + +List all the files in your 1Fichier account + + rclone ls remote: + +To copy a local directory to a 1Fichier directory called backup + + rclone copy /home/source remote:backup + +Modified time and hashes + +1Fichier does not support modification times. It supports the Whirlpool +hash algorithm. + +Duplicated files + +1Fichier can have two files with exactly the same name and path (unlike +a normal file system). + +Duplicated files cause problems with the syncing and you will see +messages in the log about duplicates. + +Forbidden characters + +1Fichier does not support the characters \ < > " ' ` $ and spaces at the +beginning of folder names. rclone automatically escapes these to a +unicode equivalent. The exception is /, which cannot be escaped and will +therefore lead to errors. + +Standard Options + +Here are the standard options specific to fichier (1Fichier). + +–fichier-api-key + +Your API Key, get it from https://1fichier.com/console/params.pl + +- Config: api_key +- Env Var: RCLONE_FICHIER_API_KEY +- Type: string +- Default: "" + +Advanced Options + +Here are the advanced options specific to fichier (1Fichier). + +–fichier-shared-folder + +If you want to download a shared folder, add this parameter + +- Config: shared_folder +- Env Var: RCLONE_FICHIER_SHARED_FOLDER +- Type: string +- Default: "" + Alias @@ -7162,51 +8209,11 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure. 
Choose a number from below, or type in your own value - 1 / Alias for an existing remote + [snip] + XX / Alias for an existing remote \ "alias" - 2 / Amazon Drive - \ "amazon cloud drive" - 3 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 4 / Backblaze B2 - \ "b2" - 5 / Box - \ "box" - 6 / Cache a remote - \ "cache" - 7 / Dropbox - \ "dropbox" - 8 / Encrypt/Decrypt a remote - \ "crypt" - 9 / FTP Connection - \ "ftp" - 10 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 11 / Google Drive - \ "drive" - 12 / Hubic - \ "hubic" - 13 / Local Disk - \ "local" - 14 / Microsoft Azure Blob Storage - \ "azureblob" - 15 / Microsoft OneDrive - \ "onedrive" - 16 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 17 / Pcloud - \ "pcloud" - 18 / QingCloud Object Storage - \ "qingstor" - 19 / SSH/SFTP Connection - \ "sftp" - 20 / Webdav - \ "webdav" - 21 / Yandex Disk - \ "yandex" - 22 / http Connection - \ "http" - Storage> 1 + [snip] + Storage> alias Remote or path to alias. Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path". remote> /mnt/storage/backup @@ -7321,35 +8328,11 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure. 
Choose a number from below, or type in your own value - 1 / Amazon Drive + [snip] + XX / Amazon Drive \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" - 10 / Local Disk - \ "local" - 11 / Microsoft OneDrive - \ "onedrive" - 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 13 / SSH/SFTP Connection - \ "sftp" - 14 / Yandex Disk - \ "yandex" - Storage> 1 + [snip] + Storage> amazon cloud drive Amazon Application Client Id - required. client_id> your client ID goes here Amazon Application Client Secret - required. @@ -7607,17 +8590,10 @@ This will guide you through an interactive setup process. name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Alias for an existing remote - \ "alias" - 2 / Amazon Drive - \ "amazon cloud drive" - 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio) - \ "s3" - 4 / Backblaze B2 - \ "b2" [snip] - 23 / http Connection - \ "http" + XX / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio) + \ "s3" + [snip] Storage> s3 Choose your S3 provider. Choose a number from below, or type in your own value @@ -7771,6 +8747,8 @@ This will guide you through an interactive setup process. \ "GLACIER" 7 / Glacier Deep Archive storage class \ "DEEP_ARCHIVE" + 8 / Intelligent-Tiering storage class + \ "INTELLIGENT_TIERING" storage_class> 1 Remote config -------------------- @@ -7902,6 +8880,9 @@ permissions are required to be available on the bucket being written to: - PutObject - PutObjectACL +When using the lsd subcommand, the ListAllMyBuckets permission is +required. 
+ Example policy: { @@ -7923,7 +8904,12 @@ Example policy: "arn:aws:s3:::BUCKET_NAME/*", "arn:aws:s3:::BUCKET_NAME" ] - } + }, + { + "Effect": "Allow", + "Action": "s3:ListAllMyBuckets", + "Resource": "arn:aws:s3:::*" + } ] } @@ -8503,6 +9489,8 @@ The storage class to use when storing new objects in S3. - Glacier storage class - “DEEP_ARCHIVE” - Glacier Deep Archive storage class + - “INTELLIGENT_TIERING” + - Intelligent-Tiering storage class –s3-storage-class @@ -9079,9 +10067,8 @@ rclone like this. name> wasabi Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) + [snip] + XX / Amazon S3 (also Dreamhost, Ceph, Minio) \ "s3" [snip] Storage> s3 @@ -9311,33 +10298,11 @@ generating and using an Application Key. name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 + [snip] + XX / Backblaze B2 \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" - 10 / Microsoft OneDrive - \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 12 / SSH/SFTP Connection - \ "sftp" - 13 / Yandex Disk - \ "yandex" - Storage> 3 + [snip] + Storage> b2 Account ID or Application Key ID account> 123456789abc Application Key @@ -9560,6 +10525,26 @@ the nearest millisecond appended to them. Note that when using --b2-versions no file write operations are permitted, so you can’t upload files or delete them. +B2 and rclone link + +Rclone supports generating file share links for private B2 buckets. 
They +can either be for a file, for example: + + ./rclone link B2:bucket/path/to/file.txt + https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx + +or if run on a directory you will get: + + ./rclone link B2:bucket/path + https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx + +You can then use the authorization token (the part of the URL from +?Authorization= onwards) on any file path under that directory. For example: + + https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx + https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx + https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx + Standard Options Here are the standard options specific to b2 (Backblaze B2). @@ -9675,14 +10660,28 @@ Disable checksums for large (> upload cutoff) files Custom endpoint for downloads. This is usually set to a Cloudflare CDN URL as Backblaze offers free -egress for data downloaded through the Cloudflare network. Leave blank -if you want to use the endpoint provided by Backblaze. +egress for data downloaded through the Cloudflare network. This is +probably only useful for a public bucket. Leave blank if you want to use +the endpoint provided by Backblaze. - Config: download_url - Env Var: RCLONE_B2_DOWNLOAD_URL - Type: string - Default: "" +–b2-download-auth-duration + +Time before the authorization token will expire in s or suffix +ms|s|m|h|d. + +The duration before the download authorization token will expire. The +minimum value is 1 second. The maximum value is one week. + +- Config: download_auth_duration +- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION +- Type: Duration +- Default: 1w + Box @@ -9707,38 +10706,10 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure.
Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box + [snip] + XX / Box \ "box" - 5 / Dropbox - \ "dropbox" - 6 / Encrypt/Decrypt a remote - \ "crypt" - 7 / FTP Connection - \ "ftp" - 8 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 9 / Google Drive - \ "drive" - 10 / Hubic - \ "hubic" - 11 / Local Disk - \ "local" - 12 / Microsoft OneDrive - \ "onedrive" - 13 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 14 / SSH/SFTP Connection - \ "sftp" - 15 / Yandex Disk - \ "yandex" - 16 / http Connection - \ "http" + [snip] Storage> box Box App Client Id - leave blank normally. client_id> @@ -9980,11 +10951,11 @@ This will guide you through an interactive setup process: name> test-cache Type of storage to configure. Choose a number from below, or type in your own value - ... - 5 / Cache a remote + [snip] + XX / Cache a remote \ "cache" - ... - Storage> 5 + [snip] + Storage> cache Remote to cache. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). @@ -10596,33 +11567,11 @@ differentiate it from the remote. name> secret Type of storage to configure. 
Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote + [snip] + XX / Encrypt/Decrypt a remote \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" - 10 / Microsoft OneDrive - \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 12 / SSH/SFTP Connection - \ "sftp" - 13 / Yandex Disk - \ "yandex" - Storage> 5 + [snip] + Storage> crypt Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). @@ -11083,33 +12032,11 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox + [snip] + XX / Dropbox \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" - 10 / Microsoft OneDrive - \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 12 / SSH/SFTP Connection - \ "sftp" - 13 / Yandex Disk - \ "yandex" - Storage> 4 + [snip] + Storage> dropbox Dropbox App Key - leave blank normally. app_key> Dropbox App Secret - leave blank normally. @@ -11267,7 +12194,7 @@ your email address as the password. Enter a string value. Press Enter for the default (""). 
Choose a number from below, or type in your own value [snip] - 10 / FTP Connection + XX / FTP Connection \ "ftp" [snip] Storage> ftp @@ -11457,33 +12384,11 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) + [snip] + XX / Google Cloud Storage (this is not Google Drive) \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" - 10 / Microsoft OneDrive - \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 12 / SSH/SFTP Connection - \ "sftp" - 13 / Yandex Disk - \ "yandex" - Storage> 6 + [snip] + Storage> google cloud storage Google Application Client Id - leave blank normally. client_id> Google Application Client Secret - leave blank normally. @@ -11888,7 +12793,7 @@ This will guide you through an interactive setup process: Type of storage to configure. Choose a number from below, or type in your own value [snip] - 10 / Google Drive + XX / Google Drive \ "drive" [snip] Storage> drive @@ -12809,7 +13714,7 @@ Here is how to create your own Google Drive client ID for rclone: 2. Select a project or create a new project. 3. Under “ENABLE APIS AND SERVICES” search for “Drive”, and enable the - then “Google Drive API”. + “Google Drive API”. 4. Click “Credentials” in the left-side panel (not “Create credentials”, which opens the wizard), then “Create credentials”, @@ -12825,6 +13730,352 @@ Here is how to create your own Google Drive client ID for rclone: (Thanks to @balazer on github for these instructions.) 
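Once you have created your own client ID and secret, they can be added to an existing drive remote without re-running the interactive wizard. A minimal sketch using rclone config update (the remote name mydrive and the credential values are placeholders, not real values):

```shell
# Hypothetical remote name "mydrive"; substitute the values shown on
# your Google API Console credentials page.
rclone config update mydrive client_id 123456789.apps.googleusercontent.com
rclone config update mydrive client_secret your-client-secret
```

You will then need to re-authorise the remote (for example with rclone config reconnect mydrive:) so that a new token is issued against your own client ID.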
+Google Photos + +The rclone backend for Google Photos is a specialized backend for +transferring photos and videos to and from Google Photos. + +NB The Google Photos API which rclone uses has quite a few limitations, +so please read the limitations section carefully to make sure it is +suitable for your use. + + +Configuring Google Photos + +The initial setup for google photos involves getting a token from +Google Photos which you need to do in your browser. rclone config walks +you through it. + +Here is an example of how to make a remote called remote. First run: + + rclone config + +This will guide you through an interactive setup process: + + No remotes found - make a new one + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> remote + Type of storage to configure. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + [snip] + XX / Google Photos + \ "google photos" + [snip] + Storage> google photos + ** See help for google photos backend at: https://rclone.org/googlephotos/ ** + + Google Application Client Id + Leave blank normally. + Enter a string value. Press Enter for the default (""). + client_id> + Google Application Client Secret + Leave blank normally. + Enter a string value. Press Enter for the default (""). + client_secret> + Set to make the Google Photos backend read only. + + If you choose read only then rclone will only request read only access + to your photos, otherwise rclone will request full access. + Enter a boolean value (true or false). Press Enter for the default ("false"). + read_only> + Edit advanced config? (y/n) + y) Yes + n) No + y/n> n + Remote config + Use auto config?
+ * Say Y if not sure + * Say N if you are working on a remote or headless machine + y) Yes + n) No + y/n> y + If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth + Log in and authorize rclone for access + Waiting for code... + Got code + + *** IMPORTANT: All media items uploaded to Google Photos with rclone + *** are stored in full resolution at original quality. These uploads + *** will count towards storage in your Google Account. + + -------------------- + [remote] + type = google photos + token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"} + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y + +Note that rclone runs a webserver on your local machine to collect the +token as returned from Google if you use auto config mode. This only +runs from the moment it opens your browser to the moment you get back +the verification code. This is on http://127.0.0.1:53682/ and this may +require you to unblock it temporarily if you are running a host +firewall, or use manual mode. + +This remote is called remote and can now be used like this + +See all the albums in your photos + + rclone lsd remote:album + +Make a new album + + rclone mkdir remote:album/newAlbum + +List the contents of an album + + rclone ls remote:album/newAlbum + +Sync /home/local/images to Google Photos, removing any excess files +in the album. + + rclone sync /home/local/images remote:album/newAlbum + + +Layout + +As Google Photos is not a general purpose cloud storage system the +backend is laid out to help you navigate it. + +The directories under media show different ways of categorizing the +media. Each file will appear multiple times. So if you want to make a +backup of your google photos you might choose to backup +remote:media/by-month. (NB remote:media/by-day is rather slow at the +moment so avoid it for syncing.)
+ +Note that all your photos and videos will appear somewhere under media, +but they may not appear under album unless you’ve put them into albums. + + / + - upload + - file1.jpg + - file2.jpg + - ... + - media + - all + - file1.jpg + - file2.jpg + - ... + - by-year + - 2000 + - file1.jpg + - ... + - 2001 + - file2.jpg + - ... + - ... + - by-month + - 2000 + - 2000-01 + - file1.jpg + - ... + - 2000-02 + - file2.jpg + - ... + - ... + - by-day + - 2000 + - 2000-01-01 + - file1.jpg + - ... + - 2000-01-02 + - file2.jpg + - ... + - ... + - album + - album name + - album name/sub + - shared-album + - album name + - album name/sub + +There are two writable parts of the tree, the upload directory and +subdirectories of the album directory. + +The upload directory is for uploading files you don’t want to put into +albums. This will be empty to start with and will contain the files +you’ve uploaded for one rclone session only, becoming empty again when +you restart rclone. The use case for this would be if you have a load of +files you just want to dump into Google Photos as a one-off. For repeated +syncing, uploading to album will work better. + +Directories within the album directory are also writable and you may +create new directories (albums) under album. If you copy files with a +directory hierarchy in there then rclone will create albums with the / +character in them. For example if you do + + rclone copy /path/to/images remote:album/images + +and the images directory contains + + images + - file1.jpg + dir + file2.jpg + dir2 + dir3 + file3.jpg + +Then rclone will create the following albums with the following files in them: + +- images + - file1.jpg +- images/dir + - file2.jpg +- images/dir2/dir3 + - file3.jpg + +This means that you can use the album path pretty much like a normal +filesystem and it is a good target for repeated syncing. + +The shared-album directory shows albums shared with you or by you.
This +is similar to the Sharing tab in the Google Photos web interface. + + +Limitations + +Only images and videos can be uploaded. If you attempt to upload files +that are not videos or images, or formats that Google Photos doesn’t +understand, rclone will upload the file, then Google Photos will give an +error when it is turned into a media item. + +Note that all media items uploaded to Google Photos through the API are +stored in full resolution at “original quality” and WILL count towards +your storage quota in your Google Account. The API does NOT offer a way +to upload in “high quality” mode. + +Downloading Images + +When images are downloaded their EXIF location data is stripped +(according to the docs and my tests). This is a limitation of the Google +Photos API and is covered by bug #112096115. + +Downloading Videos + +When videos are downloaded they are a lot more compressed than when +downloaded via the Google Photos web interface. This is covered by bug +#113672044. + +Duplicates + +If a file name is duplicated in a directory then rclone will add the +file ID into its name. So two files called file.jpg would then appear as +file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer +alas!). + +If you upload the same image (with the same binary data) twice then +Google Photos will deduplicate it. However it will retain the filename +from the first upload which may confuse rclone. For example if you +uploaded an image to upload then uploaded the same image to +album/my_album the filename of the image in album/my_album will be what +it was uploaded with initially, not what you uploaded it with to album. +In practice this shouldn’t cause too many problems. + +Modified time + +The date shown for media in Google Photos is the creation date as +determined by the EXIF information, or the upload date if that is not +known. + +This is not changeable by rclone and is not the modification date of the +media on local disk.
This means that rclone cannot use the dates from +Google Photos for syncing purposes. + +Size + +The Google Photos API does not return the size of media. This means that +when syncing to Google Photos, rclone can only do a file existence +check. + +It is possible to read the size of the media, but this needs an extra +HTTP HEAD request per media item so is very slow and uses up a lot of +transactions. This can be enabled with the --gphotos-read-size option or +the read_size = true config parameter. + +If you want to use the backend with rclone mount you will need to enable +this flag, otherwise you will not be able to read media off the mount. + +Albums + +Rclone can only upload files to albums it created. This is a limitation +of the Google Photos API. + +Rclone can only remove files it uploaded from albums it created. + +Deleting files + +Rclone can remove files from albums it created, but note that the Google +Photos API does not allow media to be deleted permanently so this media +will still remain. See bug #109759781. + +Rclone cannot delete files anywhere except under album. + +Deleting albums + +The Google Photos API does not support deleting albums - see bug +#135714733. + +Standard Options + +Here are the standard options specific to google photos (Google Photos). + +–gphotos-client-id + +Google Application Client Id Leave blank normally. + +- Config: client_id +- Env Var: RCLONE_GPHOTOS_CLIENT_ID +- Type: string +- Default: "" + +–gphotos-client-secret + +Google Application Client Secret Leave blank normally. + +- Config: client_secret +- Env Var: RCLONE_GPHOTOS_CLIENT_SECRET +- Type: string +- Default: "" + +–gphotos-read-only + +Set to make the Google Photos backend read only. + +If you choose read only then rclone will only request read only access +to your photos, otherwise rclone will request full access.
+ +- Config: read_only +- Env Var: RCLONE_GPHOTOS_READ_ONLY +- Type: bool +- Default: false + +Advanced Options + +Here are the advanced options specific to google photos (Google Photos). + +–gphotos-read-size + +Set to read the size of media items. + +Normally rclone does not read the size of media items since this takes +another transaction. This isn’t necessary for syncing. However rclone +mount needs to know the size of files in advance of reading them, so +setting this flag when using rclone mount is recommended if you want to +read the media. + +- Config: read_size +- Env Var: RCLONE_GPHOTOS_READ_SIZE +- Type: bool +- Default: false + + HTTP The HTTP remote is a read only remote for reading files of a webserver. @@ -12850,36 +14101,10 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" - 10 / Local Disk - \ "local" - 11 / Microsoft OneDrive - \ "onedrive" - 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 13 / SSH/SFTP Connection - \ "sftp" - 14 / Yandex Disk - \ "yandex" - 15 / http Connection + [snip] + XX / http Connection \ "http" + [snip] Storage> http URL of http host to connect to Choose a number from below, or type in your own value @@ -12966,6 +14191,26 @@ Advanced Options Here are the advanced options specific to http (http Connection). +–http-headers + +Set HTTP headers for all transactions + +Use this to set additional HTTP headers for all transactions + +The input format is comma separated list of key,value pairs. Standard +CSV encoding may be used. 
+ +For example to set a Cookie use ‘Cookie,name=value’, or +‘“Cookie”,“name=value”’. + +You can set multiple headers, eg +‘“Cookie”,“name=value”,“Authorization”,“xxx”’. + +- Config: headers +- Env Var: RCLONE_HTTP_HEADERS +- Type: CommaSepList +- Default: + –http-no-slash Set this if the site doesn’t end directories with / @@ -13010,33 +14255,11 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic + [snip] + XX / Hubic \ "hubic" - 9 / Local Disk - \ "local" - 10 / Microsoft OneDrive - \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 12 / SSH/SFTP Connection - \ "sftp" - 13 / Yandex Disk - \ "yandex" - Storage> 8 + [snip] + Storage> hubic Hubic Client Id - leave blank normally. client_id> Hubic Client Secret - leave blank normally. @@ -13199,15 +14422,12 @@ This will guide you through an interactive setup process: Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] - 14 / JottaCloud + XX / JottaCloud \ "jottacloud" [snip] Storage> jottacloud ** See help for jottacloud backend at: https://rclone.org/jottacloud/ ** - User Name: - Enter a string value. Press Enter for the default (""). - user> user@email.tld Edit advanced config? (y/n) y) Yes n) No @@ -13221,6 +14441,7 @@ This will guide you through an interactive setup process: y) Yes n) No y/n> y + Username> 0xC4KE@gmail.com Your Jottacloud password is only required during setup and will not be stored. 
password: @@ -13232,7 +14453,7 @@ This will guide you through an interactive setup process: Please select the device to use. Normally this will be Jotta Choose a number from below, or type in an existing value 1 > DESKTOP-3H31129 - 2 > test1 + 2 > fla1 3 > Jotta Devices> 3 Please select the mountpoint to user. Normally this will be Archive @@ -13335,19 +14556,6 @@ setting up the remote. Please be aware that this also means that copying the rclone config from one machine to another does NOT work with Jottacloud accounts. You have to create it on each machine. -Standard Options - -Here are the standard options specific to jottacloud (JottaCloud). - -–jottacloud-user - -User Name: - -- Config: user -- Env Var: RCLONE_JOTTACLOUD_USER -- Type: string -- Default: "" - Advanced Options Here are the advanced options specific to jottacloud (JottaCloud). @@ -13436,60 +14644,10 @@ This will guide you through an interactive setup process: Type of storage to configure. Enter a string value. Press Enter for the default (""). 
Choose a number from below, or type in your own value - 1 / A stackable unification remote, which can appear to merge the contents of several remotes - \ "union" - 2 / Alias for an existing remote - \ "alias" - 3 / Amazon Drive - \ "amazon cloud drive" - 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc) - \ "s3" - 5 / Backblaze B2 - \ "b2" - 6 / Box - \ "box" - 7 / Cache a remote - \ "cache" - 8 / Dropbox - \ "dropbox" - 9 / Encrypt/Decrypt a remote - \ "crypt" - 10 / FTP Connection - \ "ftp" - 11 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 12 / Google Drive - \ "drive" - 13 / Hubic - \ "hubic" - 14 / JottaCloud - \ "jottacloud" - 15 / Koofr + [snip] + XX / Koofr \ "koofr" - 16 / Local Disk - \ "local" - 17 / Mega - \ "mega" - 18 / Microsoft Azure Blob Storage - \ "azureblob" - 19 / Microsoft OneDrive - \ "onedrive" - 20 / OpenDrive - \ "opendrive" - 21 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 22 / Pcloud - \ "pcloud" - 23 / QingCloud Object Storage - \ "qingstor" - 24 / SSH/SFTP Connection - \ "sftp" - 25 / Webdav - \ "webdav" - 26 / Yandex Disk - \ "yandex" - 27 / http Connection - \ "http" + [snip] Storage> koofr ** See help for koofr backend at: https://rclone.org/koofr/ ** @@ -13584,6 +14742,16 @@ Mount ID of the mount to use. If omitted, the primary mount is used. - Type: string - Default: "" +–koofr-setmtime + +Does the backend support setting modification time. Set this to false if +you use a mount ID that points to a Dropbox or Amazon Drive backend. + +- Config: setmtime +- Env Var: RCLONE_KOOFR_SETMTIME +- Type: bool +- Default: true + Limitations Note that Koofr is case insensitive so you can’t have a file called @@ -13618,14 +14786,10 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure. 
Choose a number from below, or type in your own value - 1 / Alias for an existing remote - \ "alias" [snip] - 14 / Mega + XX / Mega \ "mega" [snip] - 23 / http Connection - \ "http" Storage> mega User name user> you@example.com @@ -13810,40 +14974,10 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box - \ "box" - 5 / Dropbox - \ "dropbox" - 6 / Encrypt/Decrypt a remote - \ "crypt" - 7 / FTP Connection - \ "ftp" - 8 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 9 / Google Drive - \ "drive" - 10 / Hubic - \ "hubic" - 11 / Local Disk - \ "local" - 12 / Microsoft Azure Blob Storage + [snip] + XX / Microsoft Azure Blob Storage \ "azureblob" - 13 / Microsoft OneDrive - \ "onedrive" - 14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 15 / SSH/SFTP Connection - \ "sftp" - 16 / Yandex Disk - \ "yandex" - 17 / http Connection - \ "http" + [snip] Storage> azureblob Storage Account Name account> account_name @@ -13962,7 +15096,7 @@ Blob Storage). 
–azureblob-account -Storage Account Name (leave blank to use connection string or SAS URL) +Storage Account Name (leave blank to use SAS URL or Emulator) - Config: account - Env Var: RCLONE_AZUREBLOB_ACCOUNT @@ -13971,7 +15105,7 @@ Storage Account Name (leave blank to use connection string or SAS URL) –azureblob-key -Storage Account Key (leave blank to use connection string or SAS URL) +Storage Account Key (leave blank to use SAS URL or Emulator) - Config: key - Env Var: RCLONE_AZUREBLOB_KEY @@ -13981,13 +15115,23 @@ Storage Account Key (leave blank to use connection string or SAS URL) –azureblob-sas-url SAS URL for container level access only (leave blank if using -account/key or connection string) +account/key or Emulator) - Config: sas_url - Env Var: RCLONE_AZUREBLOB_SAS_URL - Type: string - Default: "" +–azureblob-use-emulator + +Uses local storage emulator if provided as ‘true’ (leave blank if using +real azure storage endpoint) + +- Config: use_emulator +- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR +- Type: bool +- Default: false + Advanced Options Here are the advanced options specific to azureblob (Microsoft Azure @@ -14063,6 +15207,14 @@ Limitations MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy. +Azure Storage Emulator Support + +You can test rclone with the storage emulator locally. To do this, make +sure the azure storage emulator is installed locally, then set up a new +remote with rclone config following the instructions described in the +introduction, setting the use_emulator config option to true. You do not +need to provide a default account name or key if using the emulator. + Microsoft OneDrive @@ -14092,11 +15244,11 @@ This will guide you through an interactive setup process: Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value - ... - 18 / Microsoft OneDrive + [snip] + XX / Microsoft OneDrive \ "onedrive" - ...
- Storage> 18 + [snip] + Storage> onedrive Microsoft App Client Id Leave blank normally. Enter a string value. Press Enter for the default (""). @@ -14407,35 +15559,11 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" - 10 / OpenDrive + [snip] + XX / OpenDrive \ "opendrive" - 11 / Microsoft OneDrive - \ "onedrive" - 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 13 / SSH/SFTP Connection - \ "sftp" - 14 / Yandex Disk - \ "yandex" - Storage> 10 + [snip] + Storage> opendrive Username username> Password @@ -14529,37 +15657,11 @@ This will guide you through an interactive setup process. name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" - 10 / Local Disk - \ "local" - 11 / Microsoft OneDrive - \ "onedrive" - 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 13 / QingStor Object Storage + [snip] + XX / QingStor Object Storage \ "qingstor" - 14 / SSH/SFTP Connection - \ "sftp" - 15 / Yandex Disk - \ "yandex" - Storage> 13 + [snip] + Storage> qingstor Get QingStor credentials from runtime. 
Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter QingStor credentials in the next step @@ -14817,48 +15919,10 @@ This will guide you through an interactive setup process. name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box - \ "box" - 5 / Cache a remote - \ "cache" - 6 / Dropbox - \ "dropbox" - 7 / Encrypt/Decrypt a remote - \ "crypt" - 8 / FTP Connection - \ "ftp" - 9 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 10 / Google Drive - \ "drive" - 11 / Hubic - \ "hubic" - 12 / Local Disk - \ "local" - 13 / Microsoft Azure Blob Storage - \ "azureblob" - 14 / Microsoft OneDrive - \ "onedrive" - 15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) + [snip] + XX / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift" - 16 / Pcloud - \ "pcloud" - 17 / QingCloud Object Storage - \ "qingstor" - 18 / SSH/SFTP Connection - \ "sftp" - 19 / Webdav - \ "webdav" - 20 / Yandex Disk - \ "yandex" - 21 / http Connection - \ "http" + [snip] Storage> swift Get swift credentials from environment variables in standard OpenStack form. Choose a number from below, or type in your own value @@ -15338,44 +16402,10 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure. 
Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Box - \ "box" - 5 / Dropbox - \ "dropbox" - 6 / Encrypt/Decrypt a remote - \ "crypt" - 7 / FTP Connection - \ "ftp" - 8 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 9 / Google Drive - \ "drive" - 10 / Hubic - \ "hubic" - 11 / Local Disk - \ "local" - 12 / Microsoft Azure Blob Storage - \ "azureblob" - 13 / Microsoft OneDrive - \ "onedrive" - 14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 15 / Pcloud + [snip] + XX / Pcloud \ "pcloud" - 16 / QingCloud Object Storage - \ "qingstor" - 17 / SSH/SFTP Connection - \ "sftp" - 18 / Yandex Disk - \ "yandex" - 19 / http Connection - \ "http" + [snip] Storage> pcloud Pcloud App Client Id - leave blank normally. client_id> @@ -15465,10 +16495,213 @@ Pcloud App Client Secret Leave blank normally. - Default: "" +premiumize.me + +Paths are specified as remote:path + +Paths may be as deep as required, eg remote:directory/subdirectory. + +The initial setup for premiumize.me involves getting a token from +premiumize.me which you need to do in your browser. rclone config walks +you through it. + +Here is an example of how to make a remote called remote. First run: + + rclone config + +This will guide you through an interactive setup process: + + No remotes found - make a new one + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> remote + Type of storage to configure. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + [snip] + XX / premiumize.me + \ "premiumizeme" + [snip] + Storage> premiumizeme + ** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ ** + + Remote config + Use auto config? 
+ * Say Y if not sure + * Say N if you are working on a remote or headless machine + y) Yes + n) No + y/n> y + If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth + Log in and authorize rclone for access + Waiting for code... + Got code + -------------------- + [remote] + type = premiumizeme + token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"} + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> + +See the remote setup docs for how to set it up on a machine with no +Internet browser available. + +Note that rclone runs a webserver on your local machine to collect the +token as returned from premiumize.me. This only runs from the moment it +opens your browser to the moment you get back the verification code. +This is on http://127.0.0.1:53682/ and it may require you to +unblock it temporarily if you are running a host firewall. + +Once configured you can then use rclone like this, + +List directories in top level of your premiumize.me + + rclone lsd remote: + +List all the files in your premiumize.me + + rclone ls remote: + +To copy a local directory to a premiumize.me directory called backup + + rclone copy /home/source remote:backup + +Modified time and hashes + +premiumize.me does not support modification times or hashes, therefore +syncing will default to --size-only checking. Note that using --update +will work. + +Standard Options + +Here are the standard options specific to premiumizeme (premiumize.me). + +–premiumizeme-api-key + +API Key. + +This is not normally used - use oauth instead. + +- Config: api_key +- Env Var: RCLONE_PREMIUMIZEME_API_KEY +- Type: string +- Default: "" + +Limitations + +Note that premiumize.me is case insensitive so you can’t have a file +called “Hello.doc” and one called “hello.doc”. + +premiumize.me file names can’t have the \ or " characters in.
rclone +maps these to and from identical looking unicode equivalents \ and +" + +premiumize.me only supports filenames up to 255 characters in length. + + +put.io + +Paths are specified as remote:path + +put.io paths may be as deep as required, eg +remote:directory/subdirectory. + +The initial setup for put.io involves getting a token from put.io which +you need to do in your browser. rclone config walks you through it. + +Here is an example of how to make a remote called remote. First run: + + rclone config + +This will guide you through an interactive setup process: + + No remotes found - make a new one + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> putio + Type of storage to configure. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + [snip] + XX / Put.io + \ "putio" + [snip] + Storage> putio + ** See help for putio backend at: https://rclone.org/putio/ ** + + Remote config + Use auto config? + * Say Y if not sure + * Say N if you are working on a remote or headless machine + y) Yes + n) No + y/n> y + If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth + Log in and authorize rclone for access + Waiting for code... + Got code + -------------------- + [putio] + type = putio + token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"} + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y + Current remotes: + + Name Type + ==== ==== + putio putio + + e) Edit existing remote + n) New remote + d) Delete remote + r) Rename remote + c) Copy remote + s) Set configuration password + q) Quit config + e/n/d/r/c/s/q> q + +Note that rclone runs a webserver on your local machine to collect the +token as returned from put.io if you use auto config mode. This only +runs from the moment it opens your browser to the moment you get back +the verification code.
This is on http://127.0.0.1:53682/ and it +may require you to unblock it temporarily if you are running a host +firewall, or use manual mode. + +You can then use it like this, + +List directories in top level of your put.io + + rclone lsd remote: + +List all the files in your put.io + + rclone ls remote: + +To copy a local directory to a put.io directory called backup + + rclone copy /home/source remote:backup + + SFTP SFTP is the Secure (or SSH) File Transfer Protocol. +The SFTP backend can be used with a number of different providers: + +- C14 +- rsync.net + SFTP runs over SSH v2 and is installed as standard with most modern SSH installations. @@ -15494,36 +16727,10 @@ This will guide you through an interactive setup process. name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / FTP Connection - \ "ftp" - 7 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 8 / Google Drive - \ "drive" - 9 / Hubic - \ "hubic" - 10 / Local Disk - \ "local" - 11 / Microsoft OneDrive - \ "onedrive" - 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 13 / SSH/SFTP Connection + [snip] + XX / SSH/SFTP Connection \ "sftp" - 14 / Yandex Disk - \ "yandex" - 15 / http Connection - \ "http" + [snip] Storage> sftp SSH host to connect to Choose a number from below, or type in your own value @@ -15533,22 +16740,22 @@ This will guide you through an interactive setup process. SSH username, leave blank for current username, ncw user> sftpuser SSH port, leave blank to use default (22) - port> + port> SSH password, leave blank to use ssh-agent.
y) Yes type in my own password g) Generate random password n) No leave this optional password blank y/g/n> n Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. - key_file> + key_file> Remote config -------------------- [remote] host = example.com user = sftpuser - port = - pass = - key_file = + port = + pass = + key_file = -------------------- y) Yes this is OK e) Edit this remote @@ -15706,8 +16913,10 @@ when the ssh-agent contains many keys. –sftp-use-insecure-cipher -Enable the use of the aes128-cbc cipher. This cipher is insecure and may -allow plaintext data to be recovered by an attacker. +Enable the use of the aes128-cbc cipher and +diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 +key exchange. Those algorithms are insecure and may allow plaintext data +to be recovered by an attacker. - Config: use_insecure_cipher - Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER @@ -15717,7 +16926,9 @@ allow plaintext data to be recovered by an attacker. - “false” - Use default Cipher list. - “true” - - Enables the use of the aes128-cbc cipher. + - Enables the use of the aes128-cbc cipher and + diffie-hellman-group-exchange-sha256, + diffie-hellman-group-exchange-sha1 key exchange. –sftp-disable-hashcheck @@ -15772,6 +16983,24 @@ Set the modified time on the remote if set. - Type: bool - Default: true +–sftp-md5sum-command + +The command used to read md5 hashes. Leave blank for autodetect. + +- Config: md5sum_command +- Env Var: RCLONE_SFTP_MD5SUM_COMMAND +- Type: string +- Default: "" + +–sftp-sha1sum-command + +The command used to read sha1 hashes. Leave blank for autodetect. + +- Config: sha1sum_command +- Env Var: RCLONE_SFTP_SHA1SUM_COMMAND +- Type: string +- Default: "" + Limitations SFTP supports checksums if the same login has shell access and md5sum or @@ -15809,6 +17038,20 @@ with it: --dump-headers, --dump-bodies, --dump-auth Note that --timeout isn’t supported (but --contimeout is). 
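For servers whose hash binaries differ from the defaults, a config along these lines might work (a sketch; the remote name, host, user and the md5 -r / sha1 -r commands are assumptions, not from the manual):

```ini
[mysftp]
type = sftp
host = example.com
user = sftpuser
md5sum_command = md5 -r
sha1sum_command = sha1 -r
```

Leaving md5sum_command and sha1sum_command blank keeps the autodetect behaviour described above.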
+C14 + +C14 is supported through the SFTP backend. + +See C14’s documentation + + +rsync.net + +rsync.net is supported through the SFTP backend. + +See rsync.net’s documentation of rclone examples. + + Union The union remote provides a unification similar to UnionFS using other @@ -15853,58 +17096,10 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Alias for an existing remote - \ "alias" - 2 / Amazon Drive - \ "amazon cloud drive" - 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio) - \ "s3" - 4 / Backblaze B2 - \ "b2" - 5 / Box - \ "box" - 6 / Builds a stackable unification remote, which can appear to merge the contents of several remotes + [snip] + XX / Union merges the contents of several remotes \ "union" - 7 / Cache a remote - \ "cache" - 8 / Dropbox - \ "dropbox" - 9 / Encrypt/Decrypt a remote - \ "crypt" - 10 / FTP Connection - \ "ftp" - 11 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 12 / Google Drive - \ "drive" - 13 / Hubic - \ "hubic" - 14 / JottaCloud - \ "jottacloud" - 15 / Local Disk - \ "local" - 16 / Mega - \ "mega" - 17 / Microsoft Azure Blob Storage - \ "azureblob" - 18 / Microsoft OneDrive - \ "onedrive" - 19 / OpenDrive - \ "opendrive" - 20 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 21 / Pcloud - \ "pcloud" - 22 / QingCloud Object Storage - \ "qingstor" - 23 / SSH/SFTP Connection - \ "sftp" - 24 / Webdav - \ "webdav" - 25 / Yandex Disk - \ "yandex" - 26 / http Connection - \ "http" + [snip] Storage> union List of space separated remotes. Can be 'remotea:test/dir remoteb:', '"remotea:test/space dir" remoteb:', etc. @@ -15953,8 +17148,8 @@ will be placed into C:\dir3 Standard Options -Here are the standard options specific to union (A stackable unification -remote, which can appear to merge the contents of several remotes). 
+Here are the standard options specific to union (Union merges the +contents of several remotes). –union-remotes @@ -15993,7 +17188,7 @@ This will guide you through an interactive setup process: Type of storage to configure. Choose a number from below, or type in your own value [snip] - 22 / Webdav + XX / Webdav \ "webdav" [snip] Storage> webdav @@ -16025,7 +17220,7 @@ This will guide you through an interactive setup process: Confirm the password: password: Bearer token instead of user/pass (eg a Macaroon) - bearer_token> + bearer_token> Remote config -------------------- [remote] @@ -16034,7 +17229,7 @@ This will guide you through an interactive setup process: vendor = nextcloud user = user pass = *** ENCRYPTED *** - bearer_token = + bearer_token = -------------------- y) Yes this is OK e) Edit this remote @@ -16126,6 +17321,19 @@ Bearer token instead of user/pass (eg a Macaroon) - Type: string - Default: "" +Advanced Options + +Here are the advanced options specific to webdav (Webdav). + +–webdav-bearer-token-command + +Command to run to get a bearer token + +- Config: bearer_token_command +- Env Var: RCLONE_WEBDAV_BEARER_TOKEN_COMMAND +- Type: string +- Default: "" + Provider notes @@ -16145,27 +17353,6 @@ This is configured in an identical way to Owncloud. Note that Nextcloud does not support streaming of files (rcat) whereas Owncloud does. This may be fixed in the future. -Put.io - -put.io can be accessed in a read only way using webdav. - -Configure the url as https://webdav.put.io and use your normal account -username and password for user and pass. Set the vendor to other. - -Your config file should end up looking like this: - - [putio] - type = webdav - url = https://webdav.put.io - vendor = other - user = YourUserName - pass = encryptedpassword - -If you are using put.io with rclone mount then use the --read-only flag -to signal to the OS that it can’t write to the mount. - -For more help see the put.io webdav docs. 
- Sharepoint Rclone can be used with Sharepoint provided by OneDrive for Business or @@ -16205,8 +17392,11 @@ Your config file should look like this: dCache -dCache is a storage system with WebDAV doors that support, beside basic -and x509, authentication with Macaroons (bearer tokens). +dCache is a storage system that supports many protocols and +authentication/authorisation schemes. For WebDAV clients, it allows +users to authenticate with username and password (BASIC), X.509, +Kerberos, and various bearer tokens, including Macaroons and +OpenID-Connect access tokens. Configure as normal using the other type. Don’t enter a username or password, instead enter your Macaroon as the bearer_token. @@ -16224,6 +17414,49 @@ The config will end up looking something like this. There is a script that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file. +Macaroons may also be obtained from the dCacheView +web-browser/JavaScript client that comes with dCache. + +OpenID-Connect + +dCache also supports authenticating with OpenID-Connect access tokens. +OpenID-Connect is a protocol (based on OAuth 2.0) that allows services +to identify users who have authenticated with some central service. + +Support for OpenID-Connect in rclone is currently achieved using another +software package called oidc-agent. This is a command-line tool that +facilitates obtaining an access token. Once installed and configured, an +access token is obtained by running the oidc-token command. The +following example shows a (shortened) access token obtained from the +_XDC_ OIDC Provider. + + paul@celebrimbor:~$ oidc-token XDC + eyJraWQ[...]QFXDt0 + paul@celebrimbor:~$ + +NOTE Before the oidc-token command will work, the refresh token must be +loaded into the oidc agent. This is done with the oidc-add command +(e.g., oidc-add XDC). This is typically done once per login session. 
+Full details on this and how to register oidc-agent with your OIDC +Provider are provided in the oidc-agent documentation. + +The rclone bearer_token_command configuration option is used to fetch +the access token from oidc-agent. + +Configure as a normal WebDAV endpoint, using the ‘other’ vendor, leaving +the username and password empty. When prompted, choose to edit the +advanced config and enter the command to get a bearer token (e.g., +oidc-token XDC). + +The following example config shows a WebDAV endpoint that uses +oidc-agent to supply an access token from the _XDC_ OIDC Provider. + + [dcache] + type = webdav + url = https://dcache.example.org/ + vendor = other + bearer_token_command = oidc-token XDC + Yandex Disk @@ -16245,33 +17478,11 @@ This will guide you through an interactive setup process: name> remote Type of storage to configure. Choose a number from below, or type in your own value - 1 / Amazon Drive - \ "amazon cloud drive" - 2 / Amazon S3 (also Dreamhost, Ceph, Minio) - \ "s3" - 3 / Backblaze B2 - \ "b2" - 4 / Dropbox - \ "dropbox" - 5 / Encrypt/Decrypt a remote - \ "crypt" - 6 / Google Cloud Storage (this is not Google Drive) - \ "google cloud storage" - 7 / Google Drive - \ "drive" - 8 / Hubic - \ "hubic" - 9 / Local Disk - \ "local" - 10 / Microsoft OneDrive - \ "onedrive" - 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH) - \ "swift" - 12 / SSH/SFTP Connection - \ "sftp" - 13 / Yandex Disk + [snip] + XX / Yandex Disk \ "yandex" - Storage> 13 + [snip] + Storage> yandex Yandex Client Id - leave blank normally. client_id> Yandex Client Secret - leave blank normally. @@ -16681,11 +17892,203 @@ Don’t cross filesystem boundaries (unix/macOS only). - Type: bool - Default: false +–local-case-sensitive + +Force the filesystem to report itself as case sensitive. + +Normally the local backend declares itself as case insensitive on +Windows/macOS and case sensitive for everything else. Use this flag to +override the default choice.
+ +- Config: case_sensitive +- Env Var: RCLONE_LOCAL_CASE_SENSITIVE +- Type: bool +- Default: false + +–local-case-insensitive + +Force the filesystem to report itself as case insensitive + +Normally the local backend declares itself as case insensitive on +Windows/macOS and case sensitive for everything else. Use this flag to +override the default choice. + +- Config: case_insensitive +- Env Var: RCLONE_LOCAL_CASE_INSENSITIVE +- Type: bool +- Default: false + CHANGELOG +v1.49.0 - 2019-08-26 + +- New backends + - 1fichier (Laura Hausmann) + - Google Photos (Nick Craig-Wood) + - Putio (Cenk Alti) + - premiumize.me (Nick Craig-Wood) +- New Features + - Experimental web GUI (Chaitanya Bankanhal) + - Implement --compare-dest & --copy-dest (yparitcher) + - Implement --suffix without --backup-dir for backup to current + dir (yparitcher) + - Add --use-json-log for JSON logging (justinalin) + - Add config reconnect, config userinfo and config disconnect + subcommands. (Nick Craig-Wood) + - Add context propagation to rclone (Aleksandar Jankovic) + - Reworking internal statistics interfaces so they work with rc + jobs (Aleksandar Jankovic) + - Add Higher units for ETA (AbelThar) + - Update rclone logos to new design (Andreas Chlupka) + - hash: Add CRC-32 support (Cenk Alti) + - help showbackend: Fixed advanced option category when there are + no standard options (buengese) + - ncdu: Display/Copy to Clipboard Current Path (Gary Kim) + - operations: + - Run hashing operations in parallel (Nick Craig-Wood) + - Don’t calculate checksums when using --ignore-checksum (Nick + Craig-Wood) + - Check transfer hashes when using --size-only mode (Nick + Craig-Wood) + - Disable multi thread copy for local to local copies (Nick + Craig-Wood) + - Debug successful hashes as well as failures (Nick + Craig-Wood) + - rc + - Add ability to stop async jobs (Aleksandar Jankovic) + - Return current settings if core/bwlimit called without + parameters (Nick Craig-Wood) + - Rclone-WebUI integration 
with rclone (Chaitanya Bankanhal) + - Added command line parameter to control the cross origin + resource sharing (CORS) in the rcd. (Security Improvement) + (Chaitanya Bankanhal) + - Add anchor tags to the docs so links are consistent (Nick + Craig-Wood) + - Remove _async key from input parameters after parsing so + later operations won’t get confused (buengese) + - Add call to clear stats (Aleksandar Jankovic) + - rcd + - Auto-login for web-gui (Chaitanya Bankanhal) + - Implement --baseurl for rcd and web-gui (Chaitanya + Bankanhal) + - serve dlna + - Only select interfaces which can multicast for SSDP (Nick + Craig-Wood) + - Add more builtin mime types to cover standard audio/video + (Nick Craig-Wood) + - Fix missing mime types on Android causing missing videos + (Nick Craig-Wood) + - serve ftp + - Refactor to bring into line with other serve commands (Nick + Craig-Wood) + - Implement --auth-proxy (Nick Craig-Wood) + - serve http: Implement --baseurl (Nick Craig-Wood) + - serve restic: Implement --baseurl (Nick Craig-Wood) + - serve sftp + - Implement auth proxy (Nick Craig-Wood) + - Fix detection of whether server is authorized (Nick + Craig-Wood) + - serve webdav + - Implement --baseurl (Nick Craig-Wood) + - Support --auth-proxy (Nick Craig-Wood) +- Bug Fixes + - Make “bad record MAC” a retriable error (Nick Craig-Wood) + - copyurl: Fix copying files that return HTTP errors (Nick + Craig-Wood) + - march: Fix checking sub-directories when using --no-traverse + (buengese) + - rc + - Fix unmarshalable http.AuthFn in options and put in test for + marshalability (Nick Craig-Wood) + - Move job expire flags to rc to fix initialization problem + (Nick Craig-Wood) + - Fix --loopback with rc/list and others (Nick Craig-Wood) + - rcat: Fix slowdown on systems with multiple hashes (Nick + Craig-Wood) + - rcd: Fix permissions problems on cache directory with web gui + download (Nick Craig-Wood) +- Mount + - Default --daemon-timeout to 15 minutes on macOS and FreeBSD (Nick +
Craig-Wood) + - Update docs to show mounting from root OK for bucket based (Nick + Craig-Wood) + - Remove nonseekable flag from write files (Nick Craig-Wood) +- VFS + - Make write without cache more efficient (Nick Craig-Wood) + - Fix --vfs-cache-mode minimal and writes ignoring cached files + (Nick Craig-Wood) +- Local + - Add --local-case-sensitive and --local-case-insensitive (Nick + Craig-Wood) + - Avoid polluting page cache when uploading local files to remote + backends (Michał Matczuk) + - Don’t calculate any hashes by default (Nick Craig-Wood) + - Fadvise run syscall on a dedicated go routine (Michał Matczuk) +- Azure Blob + - Azure Storage Emulator support (Sandeep) + - Updated config help details to remove connection string + references (Sandeep) + - Make all operations work from the root (Nick Craig-Wood) +- B2 + - Implement link sharing (yparitcher) + - Enable server side copy to copy between buckets (Nick + Craig-Wood) + - Make all operations work from the root (Nick Craig-Wood) +- Drive + - Fix server side copy of big files (Nick Craig-Wood) + - Update API for teamdrive use (Nick Craig-Wood) + - Add error for purge with --drive-trashed-only (ginvine) +- Fichier + - Make FolderID int and adjust related code (buengese) +- Google Cloud Storage + - Reduce oauth scope requested as suggested by Google (Nick + Craig-Wood) + - Make all operations work from the root (Nick Craig-Wood) +- HTTP + - Add --http-headers flag for setting arbitrary headers (Nick + Craig-Wood) +- Jottacloud + - Use new api for retrieving internal username (buengese) + - Refactor configuration and minor cleanup (buengese) +- Koofr + - Support setting modification times on Koofr backend. 
(jaKa) +- Opendrive + - Refactor to use existing lib/rest facilities for uploads (Nick + Craig-Wood) +- Qingstor + - Upgrade to v3 SDK and fix listing loop (Nick Craig-Wood) + - Make all operations work from the root (Nick Craig-Wood) +- S3 + - Add INTELLIGENT_TIERING storage class (Matti Niemenmaa) + - Make all operations work from the root (Nick Craig-Wood) +- SFTP + - Add missing interface check and fix About (Nick Craig-Wood) + - Completely ignore all modtime checks if SetModTime=false (Jon + Fautley) + - Support md5/sha1 with rsync.net (Nick Craig-Wood) + - Save the md5/sha1 command in use to the config file for + efficiency (Nick Craig-Wood) + - Opt-in support for diffie-hellman-group-exchange-sha256 + diffie-hellman-group-exchange-sha1 (Yi FU) +- Swift + - Use FixRangeOption to fix 0 length files via the VFS (Nick + Craig-Wood) + - Fix upload when using no_chunk to return the correct size (Nick + Craig-Wood) + - Make all operations work from the root (Nick Craig-Wood) + - Fix segments leak during failed large file uploads. 
+ (nguyenhuuluan434) +- WebDAV + - Add --webdav-bearer-token-command (Nick Craig-Wood) + - Refresh token when it expires with --webdav-bearer-token-command + (Nick Craig-Wood) + - Add docs for using bearer_token_command with oidc-agent (Paul + Millar) + + v1.48.0 - 2019-06-15 - New commands @@ -17154,13 +18557,13 @@ v1.45 - 2018-11-24 - Integration test framework revamped with a better report and better retries (Nick Craig-Wood) - Bug Fixes - - cmd: Make –progress update the stats correctly at the end (Nick + - cmd: Make --progress update the stats correctly at the end (Nick Craig-Wood) - config: Create config directory on save if it is missing (Nick Craig-Wood) - dedupe: Check for existing filename before renaming a dupe file (ssaqua) - - move: Don’t create directories with –dry-run (Nick Craig-Wood) + - move: Don’t create directories with --dry-run (Nick Craig-Wood) - operations: Fix Purge and Rmdirs when dir is not the root (Nick Craig-Wood) - serve http/webdav/restic: Ensure rclone exits if the port is in @@ -17222,16 +18625,16 @@ v1.44 - 2018-10-15 - Show URL of backend help page when starting config (Nick Craig-Wood) - stats: Long names now split in center (Joanna Marek) - - Add –log-format flag for more control over log output (dcpu) + - Add --log-format flag for more control over log output (dcpu) - rc: Add support for OPTIONS and basic CORS (frenos) - stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes) - Bug Fixes - Fix -P not ending with a new line (Nick Craig-Wood) - config: don’t create default config dir when user supplies - –config (albertony) - - Don’t print non-ASCII characters with –progress on windows (Nick - Craig-Wood) + --config (albertony) + - Don’t print non-ASCII characters with --progress on windows + (Nick Craig-Wood) - Correct logs for excluded items (ssaqua) - Mount - Remove EXPERIMENTAL tags (Nick Craig-Wood) @@ -17264,7 +18667,7 @@ v1.44 - 2018-10-15 - Alias - Fix handling of Windows network paths (Nick Craig-Wood) - 
Azure Blob - - Add –azureblob-list-chunk parameter (Santiago Rodríguez) + - Add --azureblob-list-chunk parameter (Santiago Rodríguez) - Implemented settier command support on azureblob remote. (sandeepkru) - Work around SDK bug which causes errors for chunk-sized files @@ -17272,7 +18675,7 @@ v1.44 - 2018-10-15 - Box - Implement link sharing. (Sebastian Bünger) - Drive - - Add –drive-import-formats - google docs can now be imported + - Add --drive-import-formats - google docs can now be imported (Fabian Möller) - Rewrite mime type and extension handling (Fabian Möller) - Add document links (Fabian Möller) @@ -17280,7 +18683,7 @@ v1.44 - 2018-10-15 Möller) - Add support for apps-script to json export (Fabian Möller) - Fix escaped chars in documents during list (Fabian Möller) - - Add –drive-v2-download-min-size a workaround for slow downloads + - Add --drive-v2-download-min-size a workaround for slow downloads (Fabian Möller) - Improve directory notifications in ChangeNotify (Fabian Möller) - When listing team drives in config, continue on failure (Nick @@ -17292,8 +18695,8 @@ v1.44 - 2018-10-15 - Fix service_account_file being ignored (Fabian Möller) - Jottacloud - Minor improvement in quota info (omit if unlimited) (albertony) - - Add –fast-list support (albertony) - - Add permanent delete support: –jottacloud-hard-delete + - Add --fast-list support (albertony) + - Add permanent delete support: --jottacloud-hard-delete (albertony) - Add link sharing support (albertony) - Fix handling of reserved characters. 
(Sebastian Bünger) @@ -17314,7 +18717,7 @@ v1.44 - 2018-10-15 Miskell) - Use configured server-side-encryption and storace class options when calling CopyObject() (Paul Kohout) - - Make –s3-v2-auth flag (Nick Craig-Wood) + - Make --s3-v2-auth flag (Nick Craig-Wood) - Fix v2 auth on files with spaces (Nick Craig-Wood) - Union - Implement union backend which reads from multiple backends @@ -17322,7 +18725,7 @@ v1.44 - 2018-10-15 - Implement optional interfaces (Move, DirMove, Copy etc) (Nick Craig-Wood) - Fix ChangeNotify to support multiple remotes (Fabian Möller) - - Fix –backup-dir on union backend (Nick Craig-Wood) + - Fix --backup-dir on union backend (Nick Craig-Wood) - WebDAV - Add another time format (Nick Craig-Wood) - Add a small pause after failed upload before deleting file (Nick @@ -17339,7 +18742,7 @@ Point release to fix hubic and azureblob backends. - Bug Fixes - ncdu: Return error instead of log.Fatal in Show (Fabian Möller) - - cmd: Fix crash with –progress and –stats 0 (Nick Craig-Wood) + - cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood) - docs: Tidy website display (Anagh Kumar Baranwal) - Azure Blob: - Fix multi-part uploads. (sandeepkru) @@ -18994,28 +20397,44 @@ v0.00 - 2012-11-18 - Project started -Bugs and Limitations -Empty directories are left behind / not created +BUGS AND LIMITATIONS -With remotes that have a concept of directory, eg Local and Drive, empty -directories may be left behind, or not created when one was expected. -This is because rclone doesn’t have a concept of a directory - it only -works on objects. Most of the object storage systems can’t actually -store a directory so there is nowhere for rclone to store anything about -directories. - -You can work round this to some extent with thepurge command which will -delete everything under the path, INLUDING empty directories. 
- -This may be fixed at some point in Issue #100 Limitations Directory timestamps aren’t preserved -For the same reason as the above, rclone doesn’t have a concept of a -directory - it only works on objects, therefore it can’t preserve the -timestamps of directories. +Rclone doesn’t currently preserve the timestamps of directories. This is +because rclone only really considers objects when syncing. + +Rclone struggles with millions of files in a directory + +Currently rclone loads each directory entirely into memory before using +it. Since each Rclone object takes 0.5k-1k of memory this can take a +very long time and use an extremely large amount of memory. + +Millions of files in a directory tend to be caused by software writing +to cloud storage (eg S3 buckets). + +Bucket based remotes and folders + +Bucket based remotes (eg S3/GCS/Swift/B2) do not have a concept of +directories. Rclone therefore cannot create directories in them which +means that empty directories on a bucket based remote will tend to +disappear. + +Some software creates empty keys ending in / as directory markers. +Rclone doesn’t do this as it potentially creates more objects and costs +more. It may do so in future (probably with a flag). + + +Bugs + +Bugs are stored in rclone’s Github project: + +- Reported bugs +- Known issues Frequently Asked Questions @@ -19222,7 +20641,7 @@ License This is free software under the terms of the MIT license (check the COPYING file included with the source code).
- Copyright (C) 2012 by Nick Craig-Wood https://www.craig-wood.com/nick/ + Copyright (C) 2019 by Nick Craig-Wood https://www.craig-wood.com/nick/ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal @@ -19506,6 +20925,26 @@ Contributors - forgems forgems@gmail.com - Florian Apolloner florian@apolloner.eu - Aleksandar Jankovic office@ajankovic.com +- Maran maran@protonmail.com +- nguyenhuuluan434 nguyenhuuluan434@gmail.com +- Laura Hausmann zotan@zotan.pw laura@hausmann.dev +- yparitcher y@paritcher.com +- AbelThar abela.tharen@gmail.com +- Matti Niemenmaa matti.niemenmaa+git@iki.fi +- Russell Davis russelldavis@users.noreply.github.com +- Yi FU yi.fu@tink.se +- Paul Millar paul.millar@desy.de +- justinalin justinalin@qnap.com +- EliEron subanimehd@gmail.com +- justina777 chiahuei.lin@gmail.com +- Chaitanya Bankanhal bchaitanya15@gmail.com +- Michał Matczuk michal@scylladb.com +- Macavirus macavirus@zoho.com +- Abhinav Sharma abhi18av@users.noreply.github.com +- ginvine 34869051+ginvine@users.noreply.github.com +- Patrick Wang mail6543210@yahoo.com.tw +- Cenk Alti cenkalti@gmail.com +- Andreas Chlupka andy@chlupka.com @@ -19538,4 +20977,5 @@ You can also follow me on twitter for rclone announcements: Email Or if all else fails or you want to ask something private or -confidential email Nick Craig-Wood +confidential email Nick Craig-Wood. Please don’t email me requests for +help - those are better directed to the forum - thanks! 
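The `--webdav-bearer-token-command` option added in this release (documented in the WebDAV/dCache section above) works by running the configured command (eg `oidc-token XDC`) and using its standard output as the bearer token, re-running the command when the token expires. A minimal shell sketch of that contract follows; this is illustrative only, not rclone's actual Go implementation, and the token value and the use of `echo` as a stand-in token command are assumptions:

```shell
#!/bin/sh
# Illustrative sketch only: "echo ..." stands in for a real token
# command such as 'oidc-token XDC', and the token value is fake.
TOKEN_CMD="echo eyJraWQ-FAKE-TOKEN"

get_bearer_token() {
    # Run the configured command; its stdout (minus the trailing
    # newline) becomes the bearer token value.
    $TOKEN_CMD
}

# A WebDAV client would send this header with each request, and would
# call get_bearer_token again to refresh once the token expires.
TOKEN=$(get_bearer_token)
echo "Authorization: Bearer $TOKEN"
```

The only contract the docs describe is that the command prints the bearer token on stdout; rclone re-invokes it to refresh expired tokens.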
diff --git a/bin/make_manual.py b/bin/make_manual.py index 9e1120757..3255cd4d0 100755 --- a/bin/make_manual.py +++ b/bin/make_manual.py @@ -48,7 +48,7 @@ docs = [ "qingstor.md", "swift.md", "pcloud.md", - "premiumize.md", + "premiumizeme.md", "putio.md", "sftp.md", "union.md", diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md index ee9df1ab9..7dd97a1f8 100644 --- a/docs/content/azureblob.md +++ b/docs/content/azureblob.md @@ -145,7 +145,7 @@ Here are the standard options specific to azureblob (Microsoft Azure Blob Storag #### --azureblob-account -Storage Account Name (leave blank to use connection string or SAS URL) +Storage Account Name (leave blank to use SAS URL or Emulator) - Config: account - Env Var: RCLONE_AZUREBLOB_ACCOUNT @@ -154,7 +154,7 @@ Storage Account Name (leave blank to use connection string or SAS URL) #### --azureblob-key -Storage Account Key (leave blank to use connection string or SAS URL) +Storage Account Key (leave blank to use SAS URL or Emulator) - Config: key - Env Var: RCLONE_AZUREBLOB_KEY @@ -164,13 +164,22 @@ Storage Account Key (leave blank to use connection string or SAS URL) #### --azureblob-sas-url SAS URL for container level access only -(leave blank if using account/key or connection string) +(leave blank if using account/key or Emulator) - Config: sas_url - Env Var: RCLONE_AZUREBLOB_SAS_URL - Type: string - Default: "" +#### --azureblob-use-emulator + +Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint) + +- Config: use_emulator +- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR +- Type: bool +- Default: false + ### Advanced Options Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage). diff --git a/docs/content/b2.md b/docs/content/b2.md index a37f729c3..1fffd4557 100644 --- a/docs/content/b2.md +++ b/docs/content/b2.md @@ -425,6 +425,7 @@ Custom endpoint for downloads. 
This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. +This is probably only useful for a public bucket. Leave blank if you want to use the endpoint provided by Backblaze. - Config: download_url @@ -432,5 +433,17 @@ Leave blank if you want to use the endpoint provided by Backblaze. - Type: string - Default: "" +#### --b2-download-auth-duration + +Time before the authorization token will expire in s or suffix ms|s|m|h|d. + +The duration before the download authorization token will expire. +The minimum value is 1 second. The maximum value is one week. + +- Config: download_auth_duration +- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION +- Type: Duration +- Default: 1w + diff --git a/docs/content/changelog.md b/docs/content/changelog.md index 38ecd68f6..248948757 100644 --- a/docs/content/changelog.md +++ b/docs/content/changelog.md @@ -1,11 +1,135 @@ --- title: "Documentation" description: "Rclone Changelog" -date: "2019-06-15" +date: "2019-08-26" --- # Changelog +## v1.49.0 - 2019-08-26 + +* New backends + * [1fichier](/fichier/) (Laura Hausmann) + * [Google Photos](/googlephotos) (Nick Craig-Wood) + * [Putio](/putio/) (Cenk Alti) + * [premiumize.me](/premiumizeme/) (Nick Craig-Wood) +* New Features + * Experimental [web GUI](/gui/) (Chaitanya Bankanhal) + * Implement `--compare-dest` & `--copy-dest` (yparitcher) + * Implement `--suffix` without `--backup-dir` for backup to current dir (yparitcher) + * `config reconnect` to re-login (re-run the oauth login) for the backend. (Nick Craig-Wood) + * `config userinfo` to discover which user you are logged in as. (Nick Craig-Wood) + * `config disconnect` to disconnect you (log out) from the backend. 
(Nick Craig-Wood) + * Add `--use-json-log` for JSON logging (justinalin) + * Add context propagation to rclone (Aleksandar Jankovic) + * Reworking internal statistics interfaces so they work with rc jobs (Aleksandar Jankovic) + * Add Higher units for ETA (AbelThar) + * Update rclone logos to new design (Andreas Chlupka) + * hash: Add CRC-32 support (Cenk Alti) + * help showbackend: Fixed advanced option category when there are no standard options (buengese) + * ncdu: Display/Copy to Clipboard Current Path (Gary Kim) + * operations: + * Run hashing operations in parallel (Nick Craig-Wood) + * Don't calculate checksums when using `--ignore-checksum` (Nick Craig-Wood) + * Check transfer hashes when using `--size-only` mode (Nick Craig-Wood) + * Disable multi thread copy for local to local copies (Nick Craig-Wood) + * Debug successful hashes as well as failures (Nick Craig-Wood) + * rc + * Add ability to stop async jobs (Aleksandar Jankovic) + * Return current settings if core/bwlimit called without parameters (Nick Craig-Wood) + * Rclone-WebUI integration with rclone (Chaitanya Bankanhal) + * Added command line parameter to control the cross origin resource sharing (CORS) in the rcd. 
(Security Improvement) (Chaitanya Bankanhal) + * Add anchor tags to the docs so links are consistent (Nick Craig-Wood) + * Remove _async key from input parameters after parsing so later operations won't get confused (buengese) + * Add call to clear stats (Aleksandar Jankovic) + * rcd + * Auto-login for web-gui (Chaitanya Bankanhal) + * Implement `--baseurl` for rcd and web-gui (Chaitanya Bankanhal) + * serve dlna + * Only select interfaces which can multicast for SSDP (Nick Craig-Wood) + * Add more builtin mime types to cover standard audio/video (Nick Craig-Wood) + * Fix missing mime types on Android causing missing videos (Nick Craig-Wood) + * serve ftp + * Refactor to bring into line with other serve commands (Nick Craig-Wood) + * Implement `--auth-proxy` (Nick Craig-Wood) + * serve http: Implement `--baseurl` (Nick Craig-Wood) + * serve restic: Implement `--baseurl` (Nick Craig-Wood) + * serve sftp + * Implement auth proxy (Nick Craig-Wood) + * Fix detection of whether server is authorized (Nick Craig-Wood) + * serve webdav + * Implement `--baseurl` (Nick Craig-Wood) + * Support `--auth-proxy` (Nick Craig-Wood) +* Bug Fixes + * Make "bad record MAC" a retriable error (Nick Craig-Wood) + * copyurl: Fix copying files that return HTTP errors (Nick Craig-Wood) + * march: Fix checking sub-directories when using `--no-traverse` (buengese) + * rc + * Fix unmarshalable http.AuthFn in options and put in test for marshalability (Nick Craig-Wood) + * Move job expire flags to rc to fix initialization problem (Nick Craig-Wood) + * Fix `--loopback` with rc/list and others (Nick Craig-Wood) + * rcat: Fix slowdown on systems with multiple hashes (Nick Craig-Wood) + * rcd: Fix permissions problems on cache directory with web gui download (Nick Craig-Wood) +* Mount + * Default `--daemon-timeout` to 15 minutes on macOS and FreeBSD (Nick Craig-Wood) + * Update docs to show mounting from root OK for bucket based (Nick Craig-Wood) + * Remove nonseekable flag from write files (Nick
Craig-Wood) +* VFS + * Make write without cache more efficient (Nick Craig-Wood) + * Fix `--vfs-cache-mode minimal` and `writes` ignoring cached files (Nick Craig-Wood) +* Local + * Add `--local-case-sensitive` and `--local-case-insensitive` (Nick Craig-Wood) + * Avoid polluting page cache when uploading local files to remote backends (Michał Matczuk) + * Don't calculate any hashes by default (Nick Craig-Wood) + * Fadvise run syscall on a dedicated go routine (Michał Matczuk) +* Azure Blob + * Azure Storage Emulator support (Sandeep) + * Updated config help details to remove connection string references (Sandeep) + * Make all operations work from the root (Nick Craig-Wood) +* B2 + * Implement link sharing (yparitcher) + * Enable server side copy to copy between buckets (Nick Craig-Wood) + * Make all operations work from the root (Nick Craig-Wood) +* Drive + * Fix server side copy of big files (Nick Craig-Wood) + * Update API for teamdrive use (Nick Craig-Wood) + * Add error for purge with `--drive-trashed-only` (ginvine) +* Fichier + * Make FolderID int and adjust related code (buengese) +* Google Cloud Storage + * Reduce oauth scope requested as suggested by Google (Nick Craig-Wood) + * Make all operations work from the root (Nick Craig-Wood) +* HTTP + * Add `--http-headers` flag for setting arbitrary headers (Nick Craig-Wood) +* Jottacloud + * Use new api for retrieving internal username (buengese) + * Refactor configuration and minor cleanup (buengese) +* Koofr + * Support setting modification times on Koofr backend. 
(jaKa) +* Opendrive + * Refactor to use existing lib/rest facilities for uploads (Nick Craig-Wood) +* Qingstor + * Upgrade to v3 SDK and fix listing loop (Nick Craig-Wood) + * Make all operations work from the root (Nick Craig-Wood) +* S3 + * Add INTELLIGENT_TIERING storage class (Matti Niemenmaa) + * Make all operations work from the root (Nick Craig-Wood) +* SFTP + * Add missing interface check and fix About (Nick Craig-Wood) + * Completely ignore all modtime checks if SetModTime=false (Jon Fautley) + * Support md5/sha1 with rsync.net (Nick Craig-Wood) + * Save the md5/sha1 command in use to the config file for efficiency (Nick Craig-Wood) + * Opt-in support for diffie-hellman-group-exchange-sha256 diffie-hellman-group-exchange-sha1 (Yi FU) +* Swift + * Use FixRangeOption to fix 0 length files via the VFS (Nick Craig-Wood) + * Fix upload when using no_chunk to return the correct size (Nick Craig-Wood) + * Make all operations work from the root (Nick Craig-Wood) + * Fix segments leak during failed large file uploads. 
(nguyenhuuluan434) +* WebDAV + * Add `--webdav-bearer-token-command` (Nick Craig-Wood) + * Refresh token when it expires with `--webdav-bearer-token-command` (Nick Craig-Wood) + * Add docs for using bearer_token_command with oidc-agent (Paul Millar) + ## v1.48.0 - 2019-06-15 * New commands @@ -337,10 +461,10 @@ date: "2019-06-15" * Enable softfloat on MIPS arch (Scott Edlund) * Integration test framework revamped with a better report and better retries (Nick Craig-Wood) * Bug Fixes - * cmd: Make --progress update the stats correctly at the end (Nick Craig-Wood) + * cmd: Make `--progress` update the stats correctly at the end (Nick Craig-Wood) * config: Create config directory on save if it is missing (Nick Craig-Wood) * dedupe: Check for existing filename before renaming a dupe file (ssaqua) - * move: Don't create directories with --dry-run (Nick Craig-Wood) + * move: Don't create directories with `--dry-run` (Nick Craig-Wood) * operations: Fix Purge and Rmdirs when dir is not the root (Nick Craig-Wood) * serve http/webdav/restic: Ensure rclone exits if the port is in use (Nick Craig-Wood) * Mount @@ -387,13 +511,13 @@ date: "2019-06-15" * Implement specialised help for flags and backends (Nick Craig-Wood) * Show URL of backend help page when starting config (Nick Craig-Wood) * stats: Long names now split in center (Joanna Marek) - * Add --log-format flag for more control over log output (dcpu) + * Add `--log-format` flag for more control over log output (dcpu) * rc: Add support for OPTIONS and basic CORS (frenos) * stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes) * Bug Fixes * Fix -P not ending with a new line (Nick Craig-Wood) - * config: don't create default config dir when user supplies --config (albertony) - * Don't print non-ASCII characters with --progress on windows (Nick Craig-Wood) + * config: don't create default config dir when user supplies `--config` (albertony) + * Don't print non-ASCII characters with `--progress` on windows (Nick 
Craig-Wood) * Correct logs for excluded items (ssaqua) * Mount * Remove EXPERIMENTAL tags (Nick Craig-Wood) @@ -421,19 +545,19 @@ date: "2019-06-15" * Alias * Fix handling of Windows network paths (Nick Craig-Wood) * Azure Blob - * Add --azureblob-list-chunk parameter (Santiago Rodríguez) + * Add `--azureblob-list-chunk` parameter (Santiago Rodríguez) * Implemented settier command support on azureblob remote. (sandeepkru) * Work around SDK bug which causes errors for chunk-sized files (Nick Craig-Wood) * Box * Implement link sharing. (Sebastian Bünger) * Drive - * Add --drive-import-formats - google docs can now be imported (Fabian Möller) + * Add `--drive-import-formats` - google docs can now be imported (Fabian Möller) * Rewrite mime type and extension handling (Fabian Möller) * Add document links (Fabian Möller) * Add support for multipart document extensions (Fabian Möller) * Add support for apps-script to json export (Fabian Möller) * Fix escaped chars in documents during list (Fabian Möller) - * Add --drive-v2-download-min-size a workaround for slow downloads (Fabian Möller) + * Add `--drive-v2-download-min-size` a workaround for slow downloads (Fabian Möller) * Improve directory notifications in ChangeNotify (Fabian Möller) * When listing team drives in config, continue on failure (Nick Craig-Wood) * FTP @@ -442,8 +566,8 @@ date: "2019-06-15" * Fix service_account_file being ignored (Fabian Möller) * Jottacloud * Minor improvement in quota info (omit if unlimited) (albertony) - * Add --fast-list support (albertony) - * Add permanent delete support: --jottacloud-hard-delete (albertony) + * Add `--fast-list` support (albertony) + * Add permanent delete support: `--jottacloud-hard-delete` (albertony) * Add link sharing support (albertony) * Fix handling of reserved characters. 
(Sebastian Bünger) * Fix socket leak on Object.Remove (Nick Craig-Wood) @@ -459,13 +583,13 @@ date: "2019-06-15" * S3 * Use custom pacer, to retry operations when reasonable (Craig Miskell) * Use configured server-side-encryption and storace class options when calling CopyObject() (Paul Kohout) - * Make --s3-v2-auth flag (Nick Craig-Wood) + * Make `--s3-v2-auth` flag (Nick Craig-Wood) * Fix v2 auth on files with spaces (Nick Craig-Wood) * Union * Implement union backend which reads from multiple backends (Felix Brucker) * Implement optional interfaces (Move, DirMove, Copy etc) (Nick Craig-Wood) * Fix ChangeNotify to support multiple remotes (Fabian Möller) - * Fix --backup-dir on union backend (Nick Craig-Wood) + * Fix `--backup-dir` on union backend (Nick Craig-Wood) * WebDAV * Add another time format (Nick Craig-Wood) * Add a small pause after failed upload before deleting file (Nick Craig-Wood) @@ -480,7 +604,7 @@ Point release to fix hubic and azureblob backends. * Bug Fixes * ncdu: Return error instead of log.Fatal in Show (Fabian Möller) - * cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood) + * cmd: Fix crash with `--progress` and `--stats 0` (Nick Craig-Wood) * docs: Tidy website display (Anagh Kumar Baranwal) * Azure Blob: * Fix multi-part uploads. 
(sandeepkru) diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md index 0b5a182e2..c02a0feec 100644 --- a/docs/content/commands/rclone.md +++ b/docs/content/commands/rclone.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone" slug: rclone url: /commands/rclone/ diff --git a/docs/content/commands/rclone_about.md b/docs/content/commands/rclone_about.md index 00a41045f..3eadee23d 100644 --- a/docs/content/commands/rclone_about.md +++ b/docs/content/commands/rclone_about.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone about" slug: rclone_about url: /commands/rclone_about/ diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md index d9d122090..a0f6e18b1 100644 --- a/docs/content/commands/rclone_authorize.md +++ b/docs/content/commands/rclone_authorize.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone authorize" slug: rclone_authorize url: /commands/rclone_authorize/ diff --git a/docs/content/commands/rclone_cachestats.md b/docs/content/commands/rclone_cachestats.md index 4139908f6..e8da13137 100644 --- a/docs/content/commands/rclone_cachestats.md +++ b/docs/content/commands/rclone_cachestats.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone cachestats" slug: rclone_cachestats url: /commands/rclone_cachestats/ diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md index bcb54e215..5d555602a 100644 --- a/docs/content/commands/rclone_cat.md +++ b/docs/content/commands/rclone_cat.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone cat" slug: rclone_cat url: /commands/rclone_cat/ diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md index 2537db5d8..f6792ce41 100644 --- 
a/docs/content/commands/rclone_check.md +++ b/docs/content/commands/rclone_check.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone check" slug: rclone_check url: /commands/rclone_check/ diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md index 2c84c96ec..c8d76bc26 100644 --- a/docs/content/commands/rclone_cleanup.md +++ b/docs/content/commands/rclone_cleanup.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone cleanup" slug: rclone_cleanup url: /commands/rclone_cleanup/ diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md index 42b9f5dc4..485746e8e 100644 --- a/docs/content/commands/rclone_config.md +++ b/docs/content/commands/rclone_config.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone config" slug: rclone_config url: /commands/rclone_config/ @@ -32,11 +32,14 @@ See the [global flags page](/flags/) for global options not listed here. * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options. * [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote . +* [rclone config disconnect](/commands/rclone_config_disconnect/) - Disconnects user from remote * [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON. * [rclone config edit](/commands/rclone_config_edit/) - Enter an interactive configuration session. * [rclone config file](/commands/rclone_config_file/) - Show path of configuration file in use. * [rclone config password](/commands/rclone_config_password/) - Update password in an existing remote. * [rclone config providers](/commands/rclone_config_providers/) - List in JSON format all the providers and options. 
+* [rclone config reconnect](/commands/rclone_config_reconnect/) - Re-authenticates user with remote. * [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote. * [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote. +* [rclone config userinfo](/commands/rclone_config_userinfo/) - Prints info about logged in user of remote. diff --git a/docs/content/commands/rclone_config_create.md b/docs/content/commands/rclone_config_create.md index ce56d0a2b..23517655c 100644 --- a/docs/content/commands/rclone_config_create.md +++ b/docs/content/commands/rclone_config_create.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone config create" slug: rclone_config_create url: /commands/rclone_config_create/ diff --git a/docs/content/commands/rclone_config_delete.md b/docs/content/commands/rclone_config_delete.md index 141cddd52..4955c7a54 100644 --- a/docs/content/commands/rclone_config_delete.md +++ b/docs/content/commands/rclone_config_delete.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone config delete" slug: rclone_config_delete url: /commands/rclone_config_delete/ diff --git a/docs/content/commands/rclone_config_disconnect.md b/docs/content/commands/rclone_config_disconnect.md new file mode 100644 index 000000000..6a7d424ac --- /dev/null +++ b/docs/content/commands/rclone_config_disconnect.md @@ -0,0 +1,36 @@ +--- +date: 2019-08-26T15:19:45+01:00 +title: "rclone config disconnect" +slug: rclone_config_disconnect +url: /commands/rclone_config_disconnect/ +--- +## rclone config disconnect + +Disconnects user from remote + +### Synopsis + + +This disconnects the remote: passed in to the cloud storage system. + +This normally means revoking the oauth token. + +To reconnect use "rclone config reconnect". 
+ + +``` +rclone config disconnect remote: [flags] +``` + +### Options + +``` + -h, --help help for disconnect +``` + +See the [global flags page](/flags/) for global options not listed here. + +### SEE ALSO + +* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + diff --git a/docs/content/commands/rclone_config_dump.md b/docs/content/commands/rclone_config_dump.md index 7d8d239e7..e22214a38 100644 --- a/docs/content/commands/rclone_config_dump.md +++ b/docs/content/commands/rclone_config_dump.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone config dump" slug: rclone_config_dump url: /commands/rclone_config_dump/ diff --git a/docs/content/commands/rclone_config_edit.md b/docs/content/commands/rclone_config_edit.md index 1b45cdf72..44c5210ec 100644 --- a/docs/content/commands/rclone_config_edit.md +++ b/docs/content/commands/rclone_config_edit.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone config edit" slug: rclone_config_edit url: /commands/rclone_config_edit/ diff --git a/docs/content/commands/rclone_config_file.md b/docs/content/commands/rclone_config_file.md index 9f1df9ef9..1b53f5903 100644 --- a/docs/content/commands/rclone_config_file.md +++ b/docs/content/commands/rclone_config_file.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone config file" slug: rclone_config_file url: /commands/rclone_config_file/ diff --git a/docs/content/commands/rclone_config_password.md b/docs/content/commands/rclone_config_password.md index 333dd4211..ee3496439 100644 --- a/docs/content/commands/rclone_config_password.md +++ b/docs/content/commands/rclone_config_password.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone config password" slug: rclone_config_password url: /commands/rclone_config_password/ diff --git 
a/docs/content/commands/rclone_config_providers.md b/docs/content/commands/rclone_config_providers.md index c7dec02f0..7a59a70e7 100644 --- a/docs/content/commands/rclone_config_providers.md +++ b/docs/content/commands/rclone_config_providers.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone config providers" slug: rclone_config_providers url: /commands/rclone_config_providers/ diff --git a/docs/content/commands/rclone_config_reconnect.md b/docs/content/commands/rclone_config_reconnect.md new file mode 100644 index 000000000..c0b7a043f --- /dev/null +++ b/docs/content/commands/rclone_config_reconnect.md @@ -0,0 +1,36 @@ +--- +date: 2019-08-26T15:19:45+01:00 +title: "rclone config reconnect" +slug: rclone_config_reconnect +url: /commands/rclone_config_reconnect/ +--- +## rclone config reconnect + +Re-authenticates user with remote. + +### Synopsis + + +This reconnects remote: passed in to the cloud storage system. + +To disconnect the remote use "rclone config disconnect". + +This normally means going through the interactive oauth flow again. + + +``` +rclone config reconnect remote: [flags] +``` + +### Options + +``` + -h, --help help for reconnect +``` + +See the [global flags page](/flags/) for global options not listed here. + +### SEE ALSO + +* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. 
+ diff --git a/docs/content/commands/rclone_config_show.md b/docs/content/commands/rclone_config_show.md index e28686463..1d829d6f0 100644 --- a/docs/content/commands/rclone_config_show.md +++ b/docs/content/commands/rclone_config_show.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone config show" slug: rclone_config_show url: /commands/rclone_config_show/ diff --git a/docs/content/commands/rclone_config_update.md b/docs/content/commands/rclone_config_update.md index 98a812caa..872901f24 100644 --- a/docs/content/commands/rclone_config_update.md +++ b/docs/content/commands/rclone_config_update.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone config update" slug: rclone_config_update url: /commands/rclone_config_update/ diff --git a/docs/content/commands/rclone_config_userinfo.md b/docs/content/commands/rclone_config_userinfo.md new file mode 100644 index 000000000..7ec95532f --- /dev/null +++ b/docs/content/commands/rclone_config_userinfo.md @@ -0,0 +1,34 @@ +--- +date: 2019-08-26T15:19:45+01:00 +title: "rclone config userinfo" +slug: rclone_config_userinfo +url: /commands/rclone_config_userinfo/ +--- +## rclone config userinfo + +Prints info about logged in user of remote. + +### Synopsis + + +This prints the details of the person logged in to the cloud storage +system. + + +``` +rclone config userinfo remote: [flags] +``` + +### Options + +``` + -h, --help help for userinfo + --json Format output as JSON +``` + +See the [global flags page](/flags/) for global options not listed here. + +### SEE ALSO + +* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. 
+ diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md index 97c363c05..7b305dbb0 100644 --- a/docs/content/commands/rclone_copy.md +++ b/docs/content/commands/rclone_copy.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone copy" slug: rclone_copy url: /commands/rclone_copy/ diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md index 1cb6ac66e..820a7ff10 100644 --- a/docs/content/commands/rclone_copyto.md +++ b/docs/content/commands/rclone_copyto.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone copyto" slug: rclone_copyto url: /commands/rclone_copyto/ diff --git a/docs/content/commands/rclone_copyurl.md b/docs/content/commands/rclone_copyurl.md index e30929c09..5f96c95c4 100644 --- a/docs/content/commands/rclone_copyurl.md +++ b/docs/content/commands/rclone_copyurl.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone copyurl" slug: rclone_copyurl url: /commands/rclone_copyurl/ diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md index 404cf13ef..72cd87110 100644 --- a/docs/content/commands/rclone_cryptcheck.md +++ b/docs/content/commands/rclone_cryptcheck.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone cryptcheck" slug: rclone_cryptcheck url: /commands/rclone_cryptcheck/ diff --git a/docs/content/commands/rclone_cryptdecode.md b/docs/content/commands/rclone_cryptdecode.md index 9725d18b3..8c0e4c934 100644 --- a/docs/content/commands/rclone_cryptdecode.md +++ b/docs/content/commands/rclone_cryptdecode.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone cryptdecode" slug: rclone_cryptdecode url: /commands/rclone_cryptdecode/ diff --git a/docs/content/commands/rclone_dbhashsum.md 
b/docs/content/commands/rclone_dbhashsum.md index cfed8946c..79917a0c6 100644 --- a/docs/content/commands/rclone_dbhashsum.md +++ b/docs/content/commands/rclone_dbhashsum.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone dbhashsum" slug: rclone_dbhashsum url: /commands/rclone_dbhashsum/ diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md index 3a36895f5..819d1cbc6 100644 --- a/docs/content/commands/rclone_dedupe.md +++ b/docs/content/commands/rclone_dedupe.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone dedupe" slug: rclone_dedupe url: /commands/rclone_dedupe/ diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md index 584c7d4cc..71d8607ce 100644 --- a/docs/content/commands/rclone_delete.md +++ b/docs/content/commands/rclone_delete.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone delete" slug: rclone_delete url: /commands/rclone_delete/ diff --git a/docs/content/commands/rclone_deletefile.md b/docs/content/commands/rclone_deletefile.md index 5897162e4..7b7552689 100644 --- a/docs/content/commands/rclone_deletefile.md +++ b/docs/content/commands/rclone_deletefile.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone deletefile" slug: rclone_deletefile url: /commands/rclone_deletefile/ diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md index a9faf8c2d..9ef697f79 100644 --- a/docs/content/commands/rclone_genautocomplete.md +++ b/docs/content/commands/rclone_genautocomplete.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone genautocomplete" slug: rclone_genautocomplete url: /commands/rclone_genautocomplete/ diff --git 
a/docs/content/commands/rclone_genautocomplete_bash.md b/docs/content/commands/rclone_genautocomplete_bash.md index 1245b4540..5e00bb061 100644 --- a/docs/content/commands/rclone_genautocomplete_bash.md +++ b/docs/content/commands/rclone_genautocomplete_bash.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone genautocomplete bash" slug: rclone_genautocomplete_bash url: /commands/rclone_genautocomplete_bash/ diff --git a/docs/content/commands/rclone_genautocomplete_zsh.md b/docs/content/commands/rclone_genautocomplete_zsh.md index 024fb90e8..67a5cfba0 100644 --- a/docs/content/commands/rclone_genautocomplete_zsh.md +++ b/docs/content/commands/rclone_genautocomplete_zsh.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone genautocomplete zsh" slug: rclone_genautocomplete_zsh url: /commands/rclone_genautocomplete_zsh/ diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md index b8cc4d0b2..a69018fc5 100644 --- a/docs/content/commands/rclone_gendocs.md +++ b/docs/content/commands/rclone_gendocs.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone gendocs" slug: rclone_gendocs url: /commands/rclone_gendocs/ diff --git a/docs/content/commands/rclone_hashsum.md b/docs/content/commands/rclone_hashsum.md index 2462f8b6c..087e75779 100644 --- a/docs/content/commands/rclone_hashsum.md +++ b/docs/content/commands/rclone_hashsum.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone hashsum" slug: rclone_hashsum url: /commands/rclone_hashsum/ diff --git a/docs/content/commands/rclone_link.md b/docs/content/commands/rclone_link.md index be2283a27..8dc633839 100644 --- a/docs/content/commands/rclone_link.md +++ b/docs/content/commands/rclone_link.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: 
"rclone link" slug: rclone_link url: /commands/rclone_link/ diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md index c9a9019e9..4f8ce6eb3 100644 --- a/docs/content/commands/rclone_listremotes.md +++ b/docs/content/commands/rclone_listremotes.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone listremotes" slug: rclone_listremotes url: /commands/rclone_listremotes/ diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md index a323c5ec2..c3b83d360 100644 --- a/docs/content/commands/rclone_ls.md +++ b/docs/content/commands/rclone_ls.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone ls" slug: rclone_ls url: /commands/rclone_ls/ diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md index 3d9269424..fd7dc50f1 100644 --- a/docs/content/commands/rclone_lsd.md +++ b/docs/content/commands/rclone_lsd.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone lsd" slug: rclone_lsd url: /commands/rclone_lsd/ diff --git a/docs/content/commands/rclone_lsf.md b/docs/content/commands/rclone_lsf.md index 1091ed1d5..93215f9b4 100644 --- a/docs/content/commands/rclone_lsf.md +++ b/docs/content/commands/rclone_lsf.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone lsf" slug: rclone_lsf url: /commands/rclone_lsf/ diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md index 631aff8b4..150ef017c 100644 --- a/docs/content/commands/rclone_lsjson.md +++ b/docs/content/commands/rclone_lsjson.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone lsjson" slug: rclone_lsjson url: /commands/rclone_lsjson/ diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md index 
0914ecf8e..99842acfe 100644 --- a/docs/content/commands/rclone_lsl.md +++ b/docs/content/commands/rclone_lsl.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone lsl" slug: rclone_lsl url: /commands/rclone_lsl/ diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md index 61a157c83..219f4910e 100644 --- a/docs/content/commands/rclone_md5sum.md +++ b/docs/content/commands/rclone_md5sum.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone md5sum" slug: rclone_md5sum url: /commands/rclone_md5sum/ diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md index 9bdee3457..dd49266ef 100644 --- a/docs/content/commands/rclone_mkdir.md +++ b/docs/content/commands/rclone_mkdir.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone mkdir" slug: rclone_mkdir url: /commands/rclone_mkdir/ diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md index 57fc34da3..bdc0f5324 100644 --- a/docs/content/commands/rclone_mount.md +++ b/docs/content/commands/rclone_mount.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone mount" slug: rclone_mount url: /commands/rclone_mount/ @@ -74,10 +74,7 @@ applications won't work with their files on an rclone mount without Caching](#file-caching) section for more info. The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, -Hubic) won't work from the root - you will need to specify a bucket, -or a path within the bucket. So `swift:` won't work whereas -`swift:bucket` will as will `swift:bucket/path`. -None of these support the concept of directories, so empty +Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache. 
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md index 83f9c264a..fa52dbb06 100644 --- a/docs/content/commands/rclone_move.md +++ b/docs/content/commands/rclone_move.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone move" slug: rclone_move url: /commands/rclone_move/ diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md index 69ff931bd..3793848ad 100644 --- a/docs/content/commands/rclone_moveto.md +++ b/docs/content/commands/rclone_moveto.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone moveto" slug: rclone_moveto url: /commands/rclone_moveto/ diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md index 2b8fa195c..d2f927250 100644 --- a/docs/content/commands/rclone_ncdu.md +++ b/docs/content/commands/rclone_ncdu.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone ncdu" slug: rclone_ncdu url: /commands/rclone_ncdu/ @@ -31,6 +31,7 @@ Here are the keys - press '?' to toggle the help on and off g toggle graph n,s,C sort by name,size,count d delete file/directory + Y display current path ^L refresh screen ? 
to toggle help on and off q/ESC/c-C to quit diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md index 6cb31de19..0cd6621a6 100644 --- a/docs/content/commands/rclone_obscure.md +++ b/docs/content/commands/rclone_obscure.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone obscure" slug: rclone_obscure url: /commands/rclone_obscure/ diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md index 10c77e768..31d97647d 100644 --- a/docs/content/commands/rclone_purge.md +++ b/docs/content/commands/rclone_purge.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone purge" slug: rclone_purge url: /commands/rclone_purge/ diff --git a/docs/content/commands/rclone_rc.md b/docs/content/commands/rclone_rc.md index 37c67bfd5..a42e14e59 100644 --- a/docs/content/commands/rclone_rc.md +++ b/docs/content/commands/rclone_rc.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone rc" slug: rclone_rc url: /commands/rclone_rc/ diff --git a/docs/content/commands/rclone_rcat.md b/docs/content/commands/rclone_rcat.md index 8873865dc..b56b02e4c 100644 --- a/docs/content/commands/rclone_rcat.md +++ b/docs/content/commands/rclone_rcat.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone rcat" slug: rclone_rcat url: /commands/rclone_rcat/ diff --git a/docs/content/commands/rclone_rcd.md b/docs/content/commands/rclone_rcd.md index 679b0687e..358d72e28 100644 --- a/docs/content/commands/rclone_rcd.md +++ b/docs/content/commands/rclone_rcd.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone rcd" slug: rclone_rcd url: /commands/rclone_rcd/ diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md index 594c57548..c63bc110f 100644 --- 
a/docs/content/commands/rclone_rmdir.md +++ b/docs/content/commands/rclone_rmdir.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone rmdir" slug: rclone_rmdir url: /commands/rclone_rmdir/ diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md index 7f87e23d2..895bf3770 100644 --- a/docs/content/commands/rclone_rmdirs.md +++ b/docs/content/commands/rclone_rmdirs.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone rmdirs" slug: rclone_rmdirs url: /commands/rclone_rmdirs/ diff --git a/docs/content/commands/rclone_serve.md b/docs/content/commands/rclone_serve.md index 845cce00d..61284beee 100644 --- a/docs/content/commands/rclone_serve.md +++ b/docs/content/commands/rclone_serve.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone serve" slug: rclone_serve url: /commands/rclone_serve/ diff --git a/docs/content/commands/rclone_serve_dlna.md b/docs/content/commands/rclone_serve_dlna.md index 91ee77cf7..da866a92e 100644 --- a/docs/content/commands/rclone_serve_dlna.md +++ b/docs/content/commands/rclone_serve_dlna.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone serve dlna" slug: rclone_serve_dlna url: /commands/rclone_serve_dlna/ diff --git a/docs/content/commands/rclone_serve_ftp.md b/docs/content/commands/rclone_serve_ftp.md index a7b55e6bf..027b2dc9b 100644 --- a/docs/content/commands/rclone_serve_ftp.md +++ b/docs/content/commands/rclone_serve_ftp.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone serve ftp" slug: rclone_serve_ftp url: /commands/rclone_serve_ftp/ @@ -165,6 +165,72 @@ This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times. 
+### Auth Proxy + +If you supply the parameter `--auth-proxy /path/to/program` then +rclone will use that program to generate backends on the fly which +then are used to authenticate incoming requests. This uses a simple +JSON based protocol with input on STDIN and output on STDOUT. + +There is an example program +[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) +in the rclone source code. + +The program's job is to take a `user` and `pass` on the input and turn +those into the config for a backend on STDOUT in JSON format. This +config will have any default parameters for the backend added, but it +won't use configuration from environment variables or command line +options - it is the job of the proxy program to make a complete +config. + +This config generated must have this extra parameter +- `_root` - root to use for the backend + +And it may have this parameter +- `_obscure` - comma separated strings for parameters to obscure + +For example the program might take this on STDIN + +``` +{ + "user": "me", + "pass": "mypassword" +} +``` + +And return this on STDOUT + +``` +{ + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" +} +``` + +This would mean that an SFTP backend would be created on the fly for +the `user` and `pass` returned in the output to the host given. Note +that since `_obscure` is set to `pass`, rclone will obscure the `pass` +parameter before creating the backend (which is required for sftp +backends). + +The program can manipulate the supplied `user` in any way, for example +to proxy to many different sftp backends, you could make the +`user` be `user@example.com` and then set the `host` to `example.com` +in the output and the user to `user`. For security you'd probably want +to restrict the `host` to a limited list. + +Note that an internal cache is keyed on `user` so only use that for +configuration, don't use `pass`.
This also means that if a user's +password is changed the cache will need to expire (which takes 5 mins) +before it takes effect. + +This can be used to build general purpose proxies to any kind of +backend that rclone supports. + ``` rclone serve ftp remote:path [flags] @@ -174,6 +240,7 @@ rclone serve ftp remote:path [flags] ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121") + --auth-proxy string A program to use to create the backend from the auth. --dir-cache-time duration Time to cache directory entries for. (default 5m0s) --dir-perms FileMode Directory permissions (default 0777) --file-perms FileMode File permissions (default 0666) diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md index e56b642c7..4d75268bf 100644 --- a/docs/content/commands/rclone_serve_http.md +++ b/docs/content/commands/rclone_serve_http.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone serve http" slug: rclone_serve_http url: /commands/rclone_serve_http/ @@ -39,6 +39,14 @@ for a transfer. --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. +--baseurl controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used --baseurl "/rclone" then +rclone would serve from a URL starting with "/rclone/". This is +useful if you wish to proxy rclone serve. Rclone automatically +inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", +--baseurl "/rclone" and --baseurl "/rclone/" are all treated +identically. + #### Authentication By default this will serve files without needing a login. @@ -215,6 +223,7 @@ rclone serve http remote:path [flags] ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --baseurl string Prefix for URLs - leave blank for root. 
--cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with --dir-cache-time duration Time to cache directory entries for. (default 5m0s) diff --git a/docs/content/commands/rclone_serve_restic.md b/docs/content/commands/rclone_serve_restic.md index 87b379c44..dc54635c5 100644 --- a/docs/content/commands/rclone_serve_restic.md +++ b/docs/content/commands/rclone_serve_restic.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone serve restic" slug: rclone_serve_restic url: /commands/rclone_serve_restic/ @@ -105,6 +105,14 @@ for a transfer. --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. +--baseurl controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used --baseurl "/rclone" then +rclone would serve from a URL starting with "/rclone/". This is +useful if you wish to proxy rclone serve. Rclone automatically +inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", +--baseurl "/rclone" and --baseurl "/rclone/" are all treated +identically. + #### Authentication By default this will serve files without needing a login. @@ -148,6 +156,7 @@ rclone serve restic remote:path [flags] ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") --append-only disallow deletion of repository data + --baseurl string Prefix for URLs - leave blank for root. 
--cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with -h, --help help for restic diff --git a/docs/content/commands/rclone_serve_sftp.md b/docs/content/commands/rclone_serve_sftp.md index 4b3a98c60..191c8338d 100644 --- a/docs/content/commands/rclone_serve_sftp.md +++ b/docs/content/commands/rclone_serve_sftp.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone serve sftp" slug: rclone_serve_sftp url: /commands/rclone_serve_sftp/ @@ -176,6 +176,72 @@ This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times. +### Auth Proxy + +If you supply the parameter `--auth-proxy /path/to/program` then +rclone will use that program to generate backends on the fly which +then are used to authenticate incoming requests. This uses a simple +JSON based protocol with input on STDIN and output on STDOUT. + +There is an example program +[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) +in the rclone source code. + +The program's job is to take a `user` and `pass` on the input and turn +those into the config for a backend on STDOUT in JSON format. This +config will have any default parameters for the backend added, but it +won't use configuration from environment variables or command line +options - it is the job of the proxy program to make a complete +config.
+ +This config generated must have this extra parameter +- `_root` - root to use for the backend + +And it may have this parameter +- `_obscure` - comma separated strings for parameters to obscure + +For example the program might take this on STDIN + +``` +{ + "user": "me", + "pass": "mypassword" +} +``` + +And return this on STDOUT + +``` +{ + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" +} +``` + +This would mean that an SFTP backend would be created on the fly for +the `user` and `pass` returned in the output to the host given. Note +that since `_obscure` is set to `pass`, rclone will obscure the `pass` +parameter before creating the backend (which is required for sftp +backends). + +The program can manipulate the supplied `user` in any way, for example +to proxy to many different sftp backends, you could make the +`user` be `user@example.com` and then set the `host` to `example.com` +in the output and the user to `user`. For security you'd probably want +to restrict the `host` to a limited list. + +Note that an internal cache is keyed on `user` so only use that for +configuration, don't use `pass`. This also means that if a user's +password is changed the cache will need to expire (which takes 5 mins) +before it takes effect. + +This can be used to build general purpose proxies to any kind of +backend that rclone supports. + ``` rclone serve sftp remote:path [flags] @@ -185,6 +251,7 @@ rclone serve sftp remote:path [flags] ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2022") + --auth-proxy string A program to use to create the backend from the auth. --authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys") --dir-cache-time duration Time to cache directory entries for.
(default 5m0s) --dir-perms FileMode Directory permissions (default 0777) diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md index ba6c1fbd1..cd0750b54 100644 --- a/docs/content/commands/rclone_serve_webdav.md +++ b/docs/content/commands/rclone_serve_webdav.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone serve webdav" slug: rclone_serve_webdav url: /commands/rclone_serve_webdav/ @@ -47,6 +47,14 @@ for a transfer. --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. +--baseurl controls the URL prefix that rclone serves from. By default +rclone will serve from the root. If you used --baseurl "/rclone" then +rclone would serve from a URL starting with "/rclone/". This is +useful if you wish to proxy rclone serve. Rclone automatically +inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", +--baseurl "/rclone" and --baseurl "/rclone/" are all treated +identically. + #### Authentication By default this will serve files without needing a login. @@ -214,6 +222,72 @@ This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times. +### Auth Proxy + +If you supply the parameter `--auth-proxy /path/to/program` then +rclone will use that program to generate backends on the fly which +are then used to authenticate incoming requests. This uses a simple +JSON based protocol with input on STDIN and output on STDOUT. + +There is an example program +[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) +in the rclone source code. + +The program's job is to take a `user` and `pass` on the input and turn +those into the config for a backend on STDOUT in JSON format.
This +config will have any default parameters for the backend added, but it +won't use configuration from environment variables or command line +options - it is the job of the proxy program to make a complete +config. + +The config generated must have this extra parameter +- `_root` - root to use for the backend + +And it may have this parameter +- `_obscure` - comma separated strings for parameters to obscure + +For example, the program might take this on STDIN + +``` +{ + "user": "me", + "pass": "mypassword" +} +``` + +And return this on STDOUT + +``` +{ + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" +} +``` + +This would mean that an SFTP backend would be created on the fly for +the `user` and `pass` returned in the output to the host given. Note +that since `_obscure` is set to `pass`, rclone will obscure the `pass` +parameter before creating the backend (which is required for sftp +backends). + +The program can manipulate the supplied `user` in any way. For example, +to proxy to many different sftp backends, you could make the +`user` be `user@example.com` and then set the `host` to `example.com` +in the output and the user to `user`. For security you'd probably want +to restrict the `host` to a limited list. + +Note that an internal cache is keyed on `user` so only use that for +configuration, don't use `pass`. This also means that if a user's +password is changed the cache will need to expire (which takes 5 mins) +before it takes effect. + +This can be used to build general purpose proxies to any kind of +backend that rclone supports. + ``` rclone serve webdav remote:path [flags] ``` @@ -223,6 +297,8 @@ rclone serve webdav remote:path [flags] ``` --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080") + --auth-proxy string A program to use to create the backend from the auth. + --baseurl string Prefix for URLs - leave blank for root.
--cert string SSL PEM key (concatenation of certificate and CA certificate) --client-ca string Client certificate authority to verify clients with --dir-cache-time duration Time to cache directory entries for. (default 5m0s) diff --git a/docs/content/commands/rclone_settier.md b/docs/content/commands/rclone_settier.md index 02177cb71..b43ab5546 100644 --- a/docs/content/commands/rclone_settier.md +++ b/docs/content/commands/rclone_settier.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone settier" slug: rclone_settier url: /commands/rclone_settier/ diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md index 435ec6e13..2fd097475 100644 --- a/docs/content/commands/rclone_sha1sum.md +++ b/docs/content/commands/rclone_sha1sum.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone sha1sum" slug: rclone_sha1sum url: /commands/rclone_sha1sum/ diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md index 99a94a1ea..a5ffb10c0 100644 --- a/docs/content/commands/rclone_size.md +++ b/docs/content/commands/rclone_size.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone size" slug: rclone_size url: /commands/rclone_size/ diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md index 533173047..cd2d5d1e1 100644 --- a/docs/content/commands/rclone_sync.md +++ b/docs/content/commands/rclone_sync.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone sync" slug: rclone_sync url: /commands/rclone_sync/ diff --git a/docs/content/commands/rclone_touch.md b/docs/content/commands/rclone_touch.md index 58ee616f7..3ebf60ac0 100644 --- a/docs/content/commands/rclone_touch.md +++ b/docs/content/commands/rclone_touch.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 
2019-08-26T15:19:45+01:00 title: "rclone touch" slug: rclone_touch url: /commands/rclone_touch/ diff --git a/docs/content/commands/rclone_tree.md b/docs/content/commands/rclone_tree.md index e667f480e..34d4357cb 100644 --- a/docs/content/commands/rclone_tree.md +++ b/docs/content/commands/rclone_tree.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone tree" slug: rclone_tree url: /commands/rclone_tree/ diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md index ca2bee0e7..64886a042 100644 --- a/docs/content/commands/rclone_version.md +++ b/docs/content/commands/rclone_version.md @@ -1,5 +1,5 @@ --- -date: 2019-06-20T16:09:42+01:00 +date: 2019-08-26T15:19:45+01:00 title: "rclone version" slug: rclone_version url: /commands/rclone_version/ diff --git a/docs/content/flags.md b/docs/content/flags.md index e3317a8fd..73bdfb58c 100755 --- a/docs/content/flags.md +++ b/docs/content/flags.md @@ -1,12 +1,13 @@ --- title: "Global Flags" description: "Rclone Global Flags" -date: "2019-06-20T16:09:42+01:00" +date: "2019-08-26T15:19:45+01:00" --- # Global Flags -This describes the global flags available to every rclone command. +This describes the global flags available to every rclone command +split into two groups, non backend and backend flags. ## Non Backend Flags @@ -25,8 +26,10 @@ These flags are available for every command. -c, --checksum Skip based on checksum (if available) & size, not mod-time & size --client-cert string Client SSL certificate (PEM) for mutual TLS auth --client-key string Client SSL private key (PEM) for mutual TLS auth + --compare-dest string use DIR to server side copy files from. --config string Config file. (default "$HOME/.config/rclone/rclone.conf") --contimeout duration Connect timeout (default 1m0s) + --copy-dest string Compare dest to DIR also.
--cpuprofile string Write cpu profile to file --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring @@ -63,6 +66,7 @@ These flags are available for every command. --max-delete int When synchronizing, limit the number of deletes (default -1) --max-depth int If set limits the recursion depth to this. (default -1) --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off) + --max-stats-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer. (default off) --memprofile string Write memory profile to file --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) @@ -78,6 +82,8 @@ These flags are available for every command. -q, --quiet Print as little stuff as possible --rc Enable the remote control server. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") + --rc-allow-origin string Set the allowed origin for CORS. + --rc-baseurl string Prefix for URLs - leave blank for root. --rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --rc-client-ca string Client certificate authority to verify clients with --rc-files string Path to local files to serve on the HTTP server. @@ -93,6 +99,9 @@ These flags are available for every command. --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-user string User name for authentication. + --rc-web-fetch-url string URL to fetch the releases for webgui. 
(default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest") + --rc-web-gui Launch WebGUI on localhost + --rc-web-gui-update Update / Force update to latest version of web gui --retries int Retry operations this many times if they fail (default 3) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --size-only Skip based on size only, not mod-time or checksum @@ -104,7 +113,7 @@ These flags are available for every command. --stats-one-line-date-format string Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). See https://golang.org/pkg/time/#Time.Format --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) - --suffix string Suffix for use with --backup-dir. + --suffix string Suffix to add to changed files. --suffix-keep-extension Preserve the extension when using --suffix. --syslog Use Syslog for logging --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") @@ -115,9 +124,10 @@ These flags are available for every command. --transfers int Number of file transfers to run in parallel. (default 4) -u, --update Skip files that are newer on the destination. --use-cookies Enable session cookiejar. + --use-json-log Use json log format. --use-mmap Use mmap allocator (see docs). --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.48.0-012-g2192f468-gphotos-beta") + --user-agent string Set the user-agent to a specified string. 
The default is rclone/ version (default "rclone/v1.49.0") -v, --verbose count Print lots more stuff (repeat for more) ``` @@ -135,16 +145,18 @@ and may be set in the config file. --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --alias-remote string Remote or path to alias. --azureblob-access-tier string Access tier of blob: hot, cool or archive. - --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) + --azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator) --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) --azureblob-endpoint string Endpoint for the service - --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) + --azureblob-key string Storage Account Key (leave blank to use SAS URL or Emulator) --azureblob-list-chunk int Size of blob list. (default 5000) --azureblob-sas-url string SAS URL for container level access only --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) + --azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint) --b2-account string Account ID or Application Key ID --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) --b2-disable-checksum Disable checksums for large (> upload cutoff) files + --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w) --b2-download-url string Custom endpoint for downloads. --b2-endpoint string Endpoint for the service. --b2-hard-delete Permanently delete files on remote removal, otherwise hide files. @@ -217,6 +229,8 @@ and may be set in the config file. 
--dropbox-client-id string Dropbox App Client Id --dropbox-client-secret string Dropbox App Client Secret --dropbox-impersonate string Impersonate this user when using a business account. + --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl + --fichier-shared-folder string If you want to download a shared folder, add this parameter --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited --ftp-host string FTP host to connect to --ftp-no-check-certificate Do not verify the TLS certificate of the server @@ -235,7 +249,9 @@ and may be set in the config file. --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gphotos-client-id string Google Application Client Id --gphotos-client-secret string Google Application Client Secret + --gphotos-read-only Set to make the Google Photos backend read only. --gphotos-read-size Set to read the size of media items. + --http-headers CommaSepList Set HTTP headers for all transactions --http-no-slash Set this if the site doesn't end directories with / --http-url string URL of http host to connect to --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) @@ -246,10 +262,10 @@ and may be set in the config file. --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M) - --jottacloud-user string User Name: --koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net") --koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) + --koofr-setmtime Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. (default true) --koofr-user string Your Koofr user name -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension --local-case-insensitive Force the filesystem to report itself as case insensitive @@ -307,11 +323,13 @@ and may be set in the config file. --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent. --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. --sftp-key-use-agent When set forces the usage of the ssh-agent. + --sftp-md5sum-command string The command used to read md5 hashes. Leave blank for autodetect. --sftp-pass string SSH password, leave blank to use ssh-agent. --sftp-path-override string Override path used by SSH connection. --sftp-port string SSH port, leave blank to use default (22) --sftp-set-modtime Set the modified time on the remote if set. (default true) - --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. + --sftp-sha1sum-command string The command used to read sha1 hashes. Leave blank for autodetect. + --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. Those algorithms are insecure and may allow plaintext data to be recovered by an attacker. --sftp-user string SSH username, leave blank for current username, ncw --skip-links Don't warn about skipped symlinks. --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) @@ -336,6 +354,7 @@ and may be set in the config file. 
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --union-remotes string List of space separated remotes. --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-bearer-token-command string Command to run to get a bearer token --webdav-pass string Password. --webdav-url string URL of http host to connect to --webdav-user string User name diff --git a/docs/content/http.md b/docs/content/http.md index 6ef5ab473..5d212a82b 100644 --- a/docs/content/http.md +++ b/docs/content/http.md @@ -123,6 +123,25 @@ URL of http host to connect to Here are the advanced options specific to http (http Connection). +#### --http-headers + +Set HTTP headers for all transactions + +Use this to set additional HTTP headers for all transactions + +The input format is comma separated list of key,value pairs. Standard +[CSV encoding](https://godoc.org/encoding/csv) may be used. + +For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'. + +You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'. + + +- Config: headers +- Env Var: RCLONE_HTTP_HEADERS +- Type: CommaSepList +- Default: + #### --http-no-slash Set this if the site doesn't end directories with / diff --git a/docs/content/koofr.md b/docs/content/koofr.md index cea5d354f..7e024d1da 100644 --- a/docs/content/koofr.md +++ b/docs/content/koofr.md @@ -131,6 +131,15 @@ Mount ID of the mount to use. If omitted, the primary mount is used. - Type: string - Default: "" +#### --koofr-setmtime + +Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. 
+ +- Config: setmtime +- Env Var: RCLONE_KOOFR_SETMTIME +- Type: bool +- Default: true + ### Limitations ### diff --git a/docs/content/local.md b/docs/content/local.md index 15cef24e4..a80275331 100644 --- a/docs/content/local.md +++ b/docs/content/local.md @@ -318,4 +318,30 @@ Don't cross filesystem boundaries (unix/macOS only). - Type: bool - Default: false +#### --local-case-sensitive + +Force the filesystem to report itself as case sensitive. + +Normally the local backend declares itself as case insensitive on +Windows/macOS and case sensitive for everything else. Use this flag +to override the default choice. + +- Config: case_sensitive +- Env Var: RCLONE_LOCAL_CASE_SENSITIVE +- Type: bool +- Default: false + +#### --local-case-insensitive + +Force the filesystem to report itself as case insensitive + +Normally the local backend declares itself as case insensitive on +Windows/macOS and case sensitive for everything else. Use this flag +to override the default choice. + +- Config: case_insensitive +- Env Var: RCLONE_LOCAL_CASE_INSENSITIVE +- Type: bool +- Default: false + diff --git a/docs/content/putio.md b/docs/content/putio.md index 996ee51d6..dc357ac96 100644 --- a/docs/content/putio.md +++ b/docs/content/putio.md @@ -95,3 +95,6 @@ List all the files in your put.io To copy a local directory to a put.io directory called backup rclone copy /home/source remote:backup + + + diff --git a/docs/content/sftp.md b/docs/content/sftp.md index 125aeb03e..e7c09e477 100644 --- a/docs/content/sftp.md +++ b/docs/content/sftp.md @@ -286,6 +286,24 @@ Set the modified time on the remote if set. - Type: bool - Default: true +#### --sftp-md5sum-command + +The command used to read md5 hashes. Leave blank for autodetect. + +- Config: md5sum_command +- Env Var: RCLONE_SFTP_MD5SUM_COMMAND +- Type: string +- Default: "" + +#### --sftp-sha1sum-command + +The command used to read sha1 hashes. Leave blank for autodetect. 
+ +- Config: sha1sum_command +- Env Var: RCLONE_SFTP_SHA1SUM_COMMAND +- Type: string +- Default: "" + ### Limitations ### diff --git a/docs/content/union.md b/docs/content/union.md index 2ea039262..49db11f9c 100644 --- a/docs/content/union.md +++ b/docs/content/union.md @@ -97,7 +97,7 @@ Copy another local directory to the union directory called source, which will be ### Standard Options -Here are the standard options specific to union (A stackable unification remote, which can appear to merge the contents of several remotes). +Here are the standard options specific to union (Union merges the contents of several remotes). #### --union-remotes diff --git a/docs/content/webdav.md b/docs/content/webdav.md index f71e4be61..80dfa3ea0 100644 --- a/docs/content/webdav.md +++ b/docs/content/webdav.md @@ -167,6 +167,19 @@ Bearer token instead of user/pass (eg a Macaroon) - Type: string - Default: "" +### Advanced Options + +Here are the advanced options specific to webdav (Webdav). + +#### --webdav-bearer-token-command + +Command to run to get a bearer token + +- Config: bearer_token_command +- Env Var: RCLONE_WEBDAV_BEARER_TOKEN_COMMAND +- Type: string +- Default: "" + ## Provider notes ## diff --git a/docs/layouts/partials/version.html b/docs/layouts/partials/version.html index 1aafeee12..f67c24e26 100644 --- a/docs/layouts/partials/version.html +++ b/docs/layouts/partials/version.html @@ -1 +1 @@ -v1.48.0 \ No newline at end of file +v1.49.0 \ No newline at end of file diff --git a/fs/version.go b/fs/version.go index c7bbf6065..5447023ab 100644 --- a/fs/version.go +++ b/fs/version.go @@ -1,4 +1,4 @@ package fs // Version of rclone -var Version = "v1.48.0-DEV" +var Version = "v1.49.0" diff --git a/rclone.1 b/rclone.1 index 3ebd4f07f..452298429 100644 --- a/rclone.1 +++ b/rclone.1 @@ -1,15 +1,15 @@ .\"t .\" Automatically generated by Pandoc 2.2.1 .\" -.TH "rclone" "1" "Jun 15, 2019" "User Manual" "" +.TH "rclone" "1" "Aug 26, 2019" "User Manual" "" .hy -.SH Rclone -.PP 
-[IMAGE: Logo (https://rclone.org/img/rclone-120x120.png)] (https://rclone.org/) +.SH Rclone \- rsync for cloud storage .PP Rclone is a command line program to sync files and directories to and from: .IP \[bu] 2 +1Fichier +.IP \[bu] 2 Alibaba Cloud (Aliyun) Object Storage System (OSS) .IP \[bu] 2 Amazon Drive (See note (/amazonclouddrive/#status)) @@ -22,6 +22,8 @@ Box .IP \[bu] 2 Ceph .IP \[bu] 2 +C14 +.IP \[bu] 2 DigitalOcean Spaces .IP \[bu] 2 Dreamhost @@ -34,6 +36,8 @@ Google Cloud Storage .IP \[bu] 2 Google Drive .IP \[bu] 2 +Google Photos +.IP \[bu] 2 HTTP .IP \[bu] 2 Hubic @@ -68,6 +72,8 @@ ownCloud .IP \[bu] 2 pCloud .IP \[bu] 2 +premiumize.me +.IP \[bu] 2 put.io .IP \[bu] 2 QingStor @@ -121,6 +127,8 @@ Multi\-threaded downloads to local disk Can serve (https://rclone.org/commands/rclone_serve/) local or remote files over HTTP (https://rclone.org/commands/rclone_serve_http/)/WebDav (https://rclone.org/commands/rclone_serve_webdav/)/FTP (https://rclone.org/commands/rclone_serve_ftp/)/SFTP (https://rclone.org/commands/rclone_serve_sftp/)/dlna (https://rclone.org/commands/rclone_serve_dlna/) +.IP \[bu] 2 +Experimental Web based GUI (https://rclone.org/gui/) .PP Links .IP \[bu] 2 @@ -324,6 +332,8 @@ rclone\ config .PP See the following for detailed instructions for .IP \[bu] 2 +1Fichier (https://rclone.org/fichier/) +.IP \[bu] 2 Alias (https://rclone.org/alias/) .IP \[bu] 2 Amazon Drive (https://rclone.org/amazonclouddrive/) @@ -348,6 +358,8 @@ Google Cloud Storage (https://rclone.org/googlecloudstorage/) .IP \[bu] 2 Google Drive (https://rclone.org/drive/) .IP \[bu] 2 +Google Photos (https://rclone.org/googlephotos/) +.IP \[bu] 2 HTTP (https://rclone.org/http/) .IP \[bu] 2 Hubic (https://rclone.org/hubic/) @@ -369,6 +381,10 @@ OpenDrive (https://rclone.org/opendrive/) .IP \[bu] 2 Pcloud (https://rclone.org/pcloud/) .IP \[bu] 2 +premiumize.me (https://rclone.org/premiumizeme/) +.IP \[bu] 2 +put.io (https://rclone.org/putio/) +.IP \[bu] 2 QingStor 
(https://rclone.org/qingstor/) .IP \[bu] 2 SFTP (https://rclone.org/sftp/) @@ -430,6 +446,9 @@ rclone\ config\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ config \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone @@ -441,6 +460,10 @@ rclone config create (https://rclone.org/commands/rclone_config_create/) rclone config delete (https://rclone.org/commands/rclone_config_delete/) \- Delete an existing remote . .IP \[bu] 2 +rclone config +disconnect (https://rclone.org/commands/rclone_config_disconnect/) \- +Disconnects user from remote +.IP \[bu] 2 rclone config dump (https://rclone.org/commands/rclone_config_dump/) \- Dump the config file as JSON. .IP \[bu] 2 @@ -458,12 +481,19 @@ rclone config providers (https://rclone.org/commands/rclone_config_providers/) \- List in JSON format all the providers and options. .IP \[bu] 2 +rclone config +reconnect (https://rclone.org/commands/rclone_config_reconnect/) \- +Re\-authenticates user with remote. +.IP \[bu] 2 rclone config show (https://rclone.org/commands/rclone_config_show/) \- Print (decrypted) config file, or the config for a single remote. .IP \[bu] 2 rclone config update (https://rclone.org/commands/rclone_config_update/) \- Update options in an existing remote. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 +.IP \[bu] 2 +rclone config +userinfo (https://rclone.org/commands/rclone_config_userinfo/) \- Prints +info about logged in user of remote. .SS rclone copy .PP Copy files from source to dest, skipping already copied @@ -553,11 +583,13 @@ rclone\ copy\ source:path\ dest:path\ [flags] \ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ copy \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. 
.SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone sync .PP Make source and dest identical, modifying destination only. @@ -600,11 +632,13 @@ rclone\ sync\ source:path\ dest:path\ [flags] \ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ sync \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone move .PP Move files from source to dest. @@ -652,11 +686,13 @@ rclone\ move\ source:path\ dest:path\ [flags] \ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ move \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone delete .PP Remove the contents of path. @@ -705,11 +741,13 @@ rclone\ delete\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ delete \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone purge .PP Remove the path and all of its contents. @@ -732,11 +770,13 @@ rclone\ purge\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ purge \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. 
-.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone mkdir .PP Make the path if it doesn't already exist. @@ -756,11 +796,13 @@ rclone\ mkdir\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ mkdir \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone rmdir .PP Remove the path if empty. @@ -782,11 +824,13 @@ rclone\ rmdir\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ rmdir \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone check .PP Checks the files in the source and destination match. @@ -825,11 +869,13 @@ rclone\ check\ source:path\ dest:path\ [flags] \ \ \ \ \ \ \-\-one\-way\ \ \ \ Check\ one\ way\ only,\ source\ files\ must\ exist\ on\ remote \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone ls .PP List the objects in the path with size and path. @@ -891,11 +937,13 @@ rclone\ ls\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ ls \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone lsd .PP List all directories/containers/buckets in the path. 
@@ -973,11 +1021,13 @@ rclone\ lsd\ remote:path\ [flags] \ \ \-R,\ \-\-recursive\ \ \ Recurse\ into\ the\ listing. \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone lsl .PP List the objects in path with modification time, size and path. @@ -1039,11 +1089,13 @@ rclone\ lsl\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ lsl \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone md5sum .PP Produces an md5sum file for all the objects in the path. @@ -1064,11 +1116,13 @@ rclone\ md5sum\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ md5sum \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone sha1sum .PP Produces an sha1sum file for all the objects in the path. @@ -1089,11 +1143,13 @@ rclone\ sha1sum\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ sha1sum \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone size .PP Prints the total size and number of objects in remote:path. 
@@ -1114,11 +1170,13 @@ rclone\ size\ remote:path\ [flags] \ \ \ \ \ \ \-\-json\ \ \ format\ output\ as\ JSON \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone version .PP Show the version number. @@ -1175,11 +1233,13 @@ rclone\ version\ [flags] \ \ \-h,\ \-\-help\ \ \ \ help\ for\ version \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone cleanup .PP Clean up the remote if possible @@ -1201,11 +1261,13 @@ rclone\ cleanup\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ cleanup \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone dedupe .PP Interactively find duplicate files and delete/rename them. @@ -1340,11 +1402,13 @@ rclone\ dedupe\ [mode]\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ dedupe \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone about .PP Get quota information from the remote. 
@@ -1425,11 +1489,13 @@ rclone\ about\ remote:\ [flags] \ \ \ \ \ \ \-\-json\ \ \ Format\ output\ as\ JSON \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone authorize .PP Remote authorization. @@ -1451,11 +1517,13 @@ rclone\ authorize\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ authorize \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone cachestats .PP Print cache stats for a remote @@ -1475,11 +1543,13 @@ rclone\ cachestats\ source:\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ cachestats \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone cat .PP Concatenates any files and sends them to stdout. @@ -1534,11 +1604,13 @@ rclone\ cat\ remote:path\ [flags] \ \ \ \ \ \ \-\-tail\ int\ \ \ \ \ Only\ print\ the\ last\ N\ characters. \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone config create .PP Create a new remote with name, type and options. 
@@ -1585,11 +1657,13 @@ rclone\ config\ create\ \ \ [\ ]*\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ create \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) \- Enter an interactive configuration session. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone config delete .PP Delete an existing remote . @@ -1609,11 +1683,43 @@ rclone\ config\ delete\ \ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ delete \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. +.SS SEE ALSO +.IP \[bu] 2 +rclone config (https://rclone.org/commands/rclone_config/) \- Enter an +interactive configuration session. +.SS rclone config disconnect +.PP +Disconnects user from remote +.SS Synopsis +.PP +This disconnects the remote: passed in to the cloud storage system. +.PP +This normally means revoking the oauth token. +.PP +To reconnect use \[lq]rclone config reconnect\[rq]. +.IP +.nf +\f[C] +rclone\ config\ disconnect\ remote:\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ disconnect +\f[] +.fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) \- Enter an interactive configuration session. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone config dump .PP Dump the config file as JSON. @@ -1633,11 +1739,13 @@ rclone\ config\ dump\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ dump \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) \- Enter an interactive configuration session. 
-.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone config edit .PP Enter an interactive configuration session. @@ -1659,11 +1767,13 @@ rclone\ config\ edit\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ edit \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) \- Enter an interactive configuration session. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone config file .PP Show path of configuration file in use. @@ -1683,11 +1793,13 @@ rclone\ config\ file\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ file \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) \- Enter an interactive configuration session. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone config password .PP Update password in an existing remote. @@ -1719,11 +1831,13 @@ rclone\ config\ password\ \ [\ ]+\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ password \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) \- Enter an interactive configuration session. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone config providers .PP List in JSON format all the providers and options. @@ -1743,11 +1857,43 @@ rclone\ config\ providers\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ providers \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. +.SS SEE ALSO +.IP \[bu] 2 +rclone config (https://rclone.org/commands/rclone_config/) \- Enter an +interactive configuration session. +.SS rclone config reconnect +.PP +Re\-authenticates user with remote. 
+.SS Synopsis +.PP +This reconnects remote: passed in to the cloud storage system. +.PP +To disconnect the remote use \[lq]rclone config disconnect\[rq]. +.PP +This normally means going through the interactive oauth flow again. +.IP +.nf +\f[C] +rclone\ config\ reconnect\ remote:\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ reconnect +\f[] +.fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) \- Enter an interactive configuration session. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone config show .PP Print (decrypted) config file, or the config for a single remote. @@ -1767,11 +1913,13 @@ rclone\ config\ show\ []\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ show \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) \- Enter an interactive configuration session. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone config update .PP Update options in an existing remote. @@ -1813,11 +1961,41 @@ rclone\ config\ update\ \ [\ ]+\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ update \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. +.SS SEE ALSO +.IP \[bu] 2 +rclone config (https://rclone.org/commands/rclone_config/) \- Enter an +interactive configuration session. +.SS rclone config userinfo +.PP +Prints info about logged in user of remote. +.SS Synopsis +.PP +This prints the details of the person logged in to the cloud storage +system. 
+.IP +.nf +\f[C] +rclone\ config\ userinfo\ remote:\ [flags] +\f[] +.fi +.SS Options +.IP +.nf +\f[C] +\ \ \-h,\ \-\-help\ \ \ help\ for\ userinfo +\ \ \ \ \ \ \-\-json\ \ \ Format\ output\ as\ JSON +\f[] +.fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) \- Enter an interactive configuration session. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone copyto .PP Copy files from source to dest, skipping already copied @@ -1872,11 +2050,13 @@ rclone\ copyto\ source:path\ dest:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ copyto \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone copyurl .PP Copy url content to dest. @@ -1897,11 +2077,13 @@ rclone\ copyurl\ https://example.com\ dest:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ copyurl \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone cryptcheck .PP Cryptcheck checks the integrity of a crypted remote. @@ -1956,11 +2138,13 @@ rclone\ cryptcheck\ remote:path\ cryptedremote:path\ [flags] \ \ \ \ \ \ \-\-one\-way\ \ \ Check\ one\ way\ only,\ source\ files\ must\ exist\ on\ destination \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. 
-.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone cryptdecode .PP Cryptdecode returns unencrypted file names. @@ -1996,11 +2180,13 @@ rclone\ cryptdecode\ encryptedremote:\ encryptedfilename\ [flags] \ \ \ \ \ \ \-\-reverse\ \ \ Reverse\ cryptdecode,\ encrypts\ filenames \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone dbhashsum .PP Produces a Dropbox hash file for all the objects in the path. @@ -2023,11 +2209,13 @@ rclone\ dbhashsum\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ dbhashsum \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone deletefile .PP Remove a single file from remote. @@ -2050,11 +2238,13 @@ rclone\ deletefile\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ deletefile \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone genautocomplete .PP Output completion script for a given shell. @@ -2069,6 +2259,9 @@ Run with \[en]help to list the supported shells. \ \ \-h,\ \-\-help\ \ \ help\ for\ genautocomplete \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone @@ -2081,7 +2274,6 @@ Output bash completion script for rclone. 
rclone genautocomplete zsh (https://rclone.org/commands/rclone_genautocomplete_zsh/) \- Output zsh completion script for rclone. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone genautocomplete bash .PP Output bash completion script for rclone. @@ -2121,12 +2313,14 @@ rclone\ genautocomplete\ bash\ [output_file]\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ bash \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone genautocomplete (https://rclone.org/commands/rclone_genautocomplete/) \- Output completion script for a given shell. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone genautocomplete zsh .PP Output zsh completion script for rclone. @@ -2166,12 +2360,14 @@ rclone\ genautocomplete\ zsh\ [output_file]\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ zsh \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone genautocomplete (https://rclone.org/commands/rclone_genautocomplete/) \- Output completion script for a given shell. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone gendocs .PP Output markdown docs for rclone to the directory supplied. @@ -2194,11 +2390,13 @@ rclone\ gendocs\ output_directory\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ gendocs \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone hashsum .PP Produces an hashsum file for all the objects in the path. @@ -2241,11 +2439,13 @@ rclone\ hashsum\ \ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ hashsum \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. 
.SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone link .PP Generate public link to file/folder. @@ -2278,11 +2478,13 @@ rclone\ link\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ link \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone listremotes .PP List all the remotes in the config file. @@ -2305,11 +2507,13 @@ rclone\ listremotes\ [flags] \ \ \ \ \ \ \-\-long\ \ \ Show\ the\ type\ as\ well\ as\ names. \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone lsf .PP List directories and objects in remote:path formatted for parsing @@ -2493,11 +2697,13 @@ rclone\ lsf\ remote:path\ [flags] \ \ \-s,\ \-\-separator\ string\ \ \ Separator\ for\ the\ items\ in\ the\ format.\ (default\ ";") \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone lsjson .PP List directories and objects in the path in JSON format. @@ -2603,11 +2809,13 @@ rclone\ lsjson\ remote:path\ [flags] \ \ \-R,\ \-\-recursive\ \ \ \ Recurse\ into\ the\ listing. \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. 
.SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone mount .PP Mount the remote as file system on a mountpoint. @@ -2687,13 +2895,9 @@ rclone mount without \[lq]\[en]vfs\-cache\-mode writes\[rq] or See the File Caching section for more info. .PP The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, -Hubic) won't work from the root \- you will need to specify a bucket, or -a path within the bucket. -So \f[C]swift:\f[] won't work whereas \f[C]swift:bucket\f[] will as will -\f[C]swift:bucket/path\f[]. -None of these support the concept of directories, so empty directories -will have a tendency to disappear once they fall out of the directory -cache. +Hubic) do not support the concept of empty directories, so empty +directories will have a tendency to disappear once they fall out of the +directory cache. .PP Only supported on Linux, FreeBSD, OS X and Windows at the moment. .SS rclone mount vs rclone sync/copy @@ -2716,8 +2920,8 @@ too many callbacks to rclone from the kernel. In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as rclone using too much -memory (https://github.com/rclone/rclone/issues/2157), rclone not serving -files to +memory (https://github.com/rclone/rclone/issues/2157), rclone not +serving files to samba (https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112) and excessive time listing directories (https://github.com/rclone/rclone/issues/2095#issuecomment-371141147). @@ -2982,11 +3186,13 @@ rclone\ mount\ remote:path\ /path/to/mountpoint\ [flags] \ \ \ \ \ \ \-\-write\-back\-cache\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Makes\ kernel\ buffer\ writes\ before\ sending\ them\ to\ rclone.\ Without\ this,\ writethrough\ caching\ is\ used. 
\f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone moveto .PP Move file or directory from source to dest. @@ -3044,11 +3250,13 @@ rclone\ moveto\ source:path\ dest:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ moveto \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone ncdu .PP Explore a remote with a text based user interface. @@ -3075,6 +3283,7 @@ Here are the keys \- press `?' to toggle the help on and off \ g\ toggle\ graph \ n,s,C\ sort\ by\ name,size,count \ d\ delete\ file/directory +\ Y\ display\ current\ path \ ^L\ refresh\ screen \ ?\ to\ toggle\ help\ on\ and\ off \ q/ESC/c\-C\ to\ quit @@ -3101,11 +3310,13 @@ rclone\ ncdu\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ ncdu \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone obscure .PP Obscure password for use in the rclone.conf @@ -3125,11 +3336,13 @@ rclone\ obscure\ password\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ obscure \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone rc .PP Run a command against a running rclone. 
@@ -3185,11 +3398,13 @@ rclone\ rc\ commands\ parameter\ [flags] \ \ \ \ \ \ \-\-user\ string\ \ \ Username\ to\ use\ to\ rclone\ remote\ control. \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone rcat .PP Copies standard input to file on remote. @@ -3238,11 +3453,13 @@ rclone\ rcat\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ rcat \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone rcd .PP Run rclone listening to remote control commands only. @@ -3271,11 +3488,13 @@ rclone\ rcd\ *\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ rcd \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone rmdirs .PP Remove empty directories under the path. @@ -3304,11 +3523,13 @@ rclone\ rmdirs\ remote:path\ [flags] \ \ \ \ \ \ \-\-leave\-root\ \ \ Do\ not\ remove\ root\ directory\ if\ empty \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone serve .PP Serve a remote over a protocol. 
@@ -3338,6 +3559,9 @@ rclone\ serve\ \ [opts]\ \ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ serve \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone @@ -3360,7 +3584,6 @@ Serve the remote over SFTP. .IP \[bu] 2 rclone serve webdav (https://rclone.org/commands/rclone_serve_webdav/) \- Serve remote:path over webdav. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone serve dlna .PP Serve remote:path over DLNA @@ -3584,11 +3807,13 @@ rclone\ serve\ dlna\ remote:path\ [flags] \ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ SizeSuffix\ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone serve (https://rclone.org/commands/rclone_serve/) \- Serve a remote over a protocol. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone serve ftp .PP Serve remote:path over FTP. @@ -3775,6 +4000,79 @@ This mode should support all normal file system operations. .PP If an upload or download fails it will be retried up to \[en]low\-level\-retries times. +.SS Auth Proxy +.PP +If you supply the parameter \f[C]\-\-auth\-proxy\ /path/to/program\f[] +then rclone will use that program to generate backends on the fly which +are then used to authenticate incoming requests. +This uses a simple JSON based protocol with input on STDIN and output on +STDOUT. +.PP +There is an example program +bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py) +in the rclone source code. +.PP +The program's job is to take a \f[C]user\f[] and \f[C]pass\f[] on the +input and turn those into the config for a backend on STDOUT in JSON +format.
+This config will have any default parameters for the backend added, but +it won't use configuration from environment variables or command line +options \- it is the job of the proxy program to make a complete config. +.PP +This generated config must have this extra parameter \- \f[C]_root\f[] +\- root to use for the backend +.PP +And it may have this parameter \- \f[C]_obscure\f[] \- comma separated +strings for parameters to obscure +.PP +For example the program might take this on STDIN +.IP +.nf +\f[C] +{ +\ \ \ \ "user":\ "me", +\ \ \ \ "pass":\ "mypassword" +} +\f[] +.fi +.PP +And return this on STDOUT +.IP +.nf +\f[C] +{ +\ \ \ \ "type":\ "sftp", +\ \ \ \ "_root":\ "", +\ \ \ \ "_obscure":\ "pass", +\ \ \ \ "user":\ "me", +\ \ \ \ "pass":\ "mypassword", +\ \ \ \ "host":\ "sftp.example.com" +} +\f[] +.fi +.PP +This would mean that an SFTP backend would be created on the fly for the +\f[C]user\f[] and \f[C]pass\f[] returned in the output to the host +given. +Note that since \f[C]_obscure\f[] is set to \f[C]pass\f[], rclone will +obscure the \f[C]pass\f[] parameter before creating the backend (which +is required for sftp backends). +.PP +The program can manipulate the supplied \f[C]user\f[] in any way; for +example, to proxy to many different sftp backends, you could make +the \f[C]user\f[] be \f[C]user\@example.com\f[] and then set the +\f[C]host\f[] to \f[C]example.com\f[] in the output and the user to +\f[C]user\f[]. +For security you'd probably want to restrict the \f[C]host\f[] to a +limited list. +.PP +Note that an internal cache is keyed on \f[C]user\f[] so only use that +for configuration, don't use \f[C]pass\f[]. +This also means that if a user's password is changed the cache will need +to expire (which takes 5 mins) before it takes effect. +.PP +This can be used to build general purpose proxies to any kind of backend +that rclone supports.
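The STDIN/STDOUT exchange described above can be sketched as a small Python program. This is only a minimal illustration of the documented protocol, not the shipped bin/test_proxy.py; the sftp host is a placeholder, and a real proxy would validate the credentials and restrict the host to a known list.

```python
import json

def make_config(request):
    """Turn the incoming {"user": ..., "pass": ...} request into a
    complete backend config, as the auth proxy protocol requires.

    "_root" (required) is the root directory to serve for this user;
    "_obscure" names the parameters rclone should obscure before use.
    """
    return {
        "type": "sftp",
        "_root": "",
        "_obscure": "pass",
        "user": request["user"],
        "pass": request["pass"],
        "host": "sftp.example.com",  # placeholder host for illustration
    }

# A real --auth-proxy program would read the request from STDIN and
# write the config to STDOUT, e.g.:
#   json.dump(make_config(json.load(sys.stdin)), sys.stdout)
# Demo with the example request from the docs:
request = json.loads('{"user": "me", "pass": "mypassword"}')
print(json.dumps(make_config(request), indent=4))
```

Run once per authentication attempt, the program's output is cached per `user` (as noted above), so repeated logins by the same user reuse the generated backend.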
.IP .nf \f[C] @@ -3786,6 +4084,7 @@ rclone\ serve\ ftp\ remote:path\ [flags] .nf \f[C] \ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:2121") +\ \ \ \ \ \ \-\-auth\-proxy\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A\ program\ to\ use\ to\ create\ the\ backend\ from\ the\ auth. \ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) \ \ \ \ \ \ \-\-dir\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ permissions\ (default\ 0777) \ \ \ \ \ \ \-\-file\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ File\ permissions\ (default\ 0666) @@ -3810,11 +4109,13 @@ rclone\ serve\ ftp\ remote:path\ [flags] \ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ SizeSuffix\ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone serve (https://rclone.org/commands/rclone_serve/) \- Serve a remote over a protocol. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone serve http .PP Serve the remote over HTTP. @@ -3850,6 +4151,16 @@ Note that this is the total time for a transfer. .PP \[en]max\-header\-bytes controls the maximum number of bytes the server will accept in the HTTP header. +.PP +\[en]baseurl controls the URL prefix that rclone serves from. +By default rclone will serve from the root. +If you used \[en]baseurl \[lq]/rclone\[rq] then rclone would serve from +a URL starting with \[lq]/rclone/\[rq]. +This is useful if you wish to proxy rclone serve. 
+Rclone automatically inserts leading and trailing \[lq]/\[rq] on +\[en]baseurl, so \[en]baseurl \[lq]rclone\[rq], \[en]baseurl +\[lq]/rclone\[rq] and \[en]baseurl \[lq]/rclone/\[rq] are all treated +identically. .SS Authentication .PP By default this will serve files without needing a login. @@ -4059,6 +4370,7 @@ rclone\ serve\ http\ remote:path\ [flags] .nf \f[C] \ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:8080") +\ \ \ \ \ \ \-\-baseurl\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Prefix\ for\ URLs\ \-\ leave\ blank\ for\ root. \ \ \ \ \ \ \-\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ key\ (concatenation\ of\ certificate\ and\ CA\ certificate) \ \ \ \ \ \ \-\-client\-ca\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ certificate\ authority\ to\ verify\ clients\ with \ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) @@ -4089,11 +4401,13 @@ rclone\ serve\ http\ remote:path\ [flags] \ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ SizeSuffix\ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone serve (https://rclone.org/commands/rclone_serve/) \- Serve a remote over a protocol. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone serve restic .PP Serve the remote for restic's REST API. @@ -4203,6 +4517,16 @@ Note that this is the total time for a transfer. .PP \[en]max\-header\-bytes controls the maximum number of bytes the server will accept in the HTTP header. +.PP +\[en]baseurl controls the URL prefix that rclone serves from. 
+By default rclone will serve from the root. +If you used \[en]baseurl \[lq]/rclone\[rq] then rclone would serve from +a URL starting with \[lq]/rclone/\[rq]. +This is useful if you wish to proxy rclone serve. +Rclone automatically inserts leading and trailing \[lq]/\[rq] on +\[en]baseurl, so \[en]baseurl \[lq]rclone\[rq], \[en]baseurl +\[lq]/rclone\[rq] and \[en]baseurl \[lq]/rclone/\[rq] are all treated +identically. .SS Authentication .PP By default this will serve files without needing a login. @@ -4252,6 +4576,7 @@ rclone\ serve\ restic\ remote:path\ [flags] \f[C] \ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:8080") \ \ \ \ \ \ \-\-append\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ disallow\ deletion\ of\ repository\ data +\ \ \ \ \ \ \-\-baseurl\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Prefix\ for\ URLs\ \-\ leave\ blank\ for\ root. \ \ \ \ \ \ \-\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ key\ (concatenation\ of\ certificate\ and\ CA\ certificate) \ \ \ \ \ \ \-\-client\-ca\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ certificate\ authority\ to\ verify\ clients\ with \ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ restic @@ -4267,11 +4592,13 @@ rclone\ serve\ restic\ remote:path\ [flags] \ \ \ \ \ \ \-\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ for\ authentication. \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone serve (https://rclone.org/commands/rclone_serve/) \- Serve a remote over a protocol. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone serve sftp .PP Serve the remote over SFTP. @@ -4471,6 +4798,79 @@ This mode should support all normal file system operations. .PP If an upload or download fails it will be retried up to \[en]low\-level\-retries times. 
+.SS Auth Proxy
+.PP
+If you supply the parameter \f[C]\-\-auth\-proxy\ /path/to/program\f[]
+then rclone will use that program to generate backends on the fly which
+then are used to authenticate incoming requests.
+This uses a simple JSON based protocol with input on STDIN and output on
+STDOUT.
+.PP
+There is an example program
+bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py)
+in the rclone source code.
+.PP
+The program's job is to take a \f[C]user\f[] and \f[C]pass\f[] on the
+input and turn those into the config for a backend on STDOUT in JSON
+format.
+This config will have any default parameters for the backend added, but
+it won't use configuration from environment variables or command line
+options \- it is the job of the proxy program to make a complete config.
+.PP
+The config generated must have this extra parameter \- \f[C]_root\f[]
+\- root to use for the backend
+.PP
+And it may have this parameter \- \f[C]_obscure\f[] \- comma separated
+strings for parameters to obscure
+.PP
+For example the program might take this on STDIN
+.IP
+.nf
+\f[C]
+{
+\ \ \ \ "user":\ "me",
+\ \ \ \ "pass":\ "mypassword"
+}
+\f[]
+.fi
+.PP
+And return this on STDOUT
+.IP
+.nf
+\f[C]
+{
+\ \ \ \ "type":\ "sftp",
+\ \ \ \ "_root":\ "",
+\ \ \ \ "_obscure":\ "pass",
+\ \ \ \ "user":\ "me",
+\ \ \ \ "pass":\ "mypassword",
+\ \ \ \ "host":\ "sftp.example.com"
+}
+\f[]
+.fi
+.PP
+This would mean that an SFTP backend would be created on the fly for the
+\f[C]user\f[] and \f[C]pass\f[] returned in the output to the host
+given.
+Note that since \f[C]_obscure\f[] is set to \f[C]pass\f[], rclone will
+obscure the \f[C]pass\f[] parameter before creating the backend (which
+is required for sftp backends).
+.PP
+The program can manipulate the supplied \f[C]user\f[] in any way, for
+example, to proxy to many different sftp backends, you could make
+the \f[C]user\f[] be \f[C]user\@example.com\f[] and then set the
+\f[C]host\f[] to \f[C]example.com\f[] in the output and the user to
+\f[C]user\f[].
+For security you'd probably want to restrict the \f[C]host\f[] to a
+limited list.
+.PP
+Note that an internal cache is keyed on \f[C]user\f[] so only use that
+for configuration, don't use \f[C]pass\f[].
+This also means that if a user's password is changed the cache will need
+to expire (which takes 5 mins) before it takes effect.
+.PP
+This can be used to build general purpose proxies to any kind of backend
+that rclone supports.
 .IP
 .nf
 \f[C]
@@ -4482,6 +4882,7 @@ rclone\ serve\ sftp\ remote:path\ [flags]
 .nf
 \f[C]
 \ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:2022")
+\ \ \ \ \ \ \-\-auth\-proxy\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A\ program\ to\ use\ to\ create\ the\ backend\ from\ the\ auth.
 \ \ \ \ \ \ \-\-authorized\-keys\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Authorized\ keys\ file\ (default\ "~/.ssh/authorized_keys")
 \ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s)
 \ \ \ \ \ \ \-\-dir\-perms\ FileMode\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ permissions\ (default\ 0777)
@@ -4507,11 +4908,13 @@ rclone\ serve\ sftp\ remote:path\ [flags]
 \ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ SizeSuffix\ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off)
 \f[]
 .fi
+.PP
+See the global flags page (https://rclone.org/flags/) for global options
+not listed here.
.SS SEE ALSO
.IP \[bu] 2
rclone serve (https://rclone.org/commands/rclone_serve/) \- Serve a
remote over a protocol.
-.SS Auto generated by spf13/cobra on 15\-Jun\-2019
.SS rclone serve webdav
.PP
Serve remote:path over webdav.
@@ -4550,6 +4953,16 @@ Note that this is the total time for a transfer.
.PP
\[en]max\-header\-bytes controls the maximum number of bytes the server
will accept in the HTTP header.
+.PP
+\[en]baseurl controls the URL prefix that rclone serves from.
+By default rclone will serve from the root.
+If you used \[en]baseurl \[lq]/rclone\[rq] then rclone would serve from
+a URL starting with \[lq]/rclone/\[rq].
+This is useful if you wish to proxy rclone serve.
+Rclone automatically inserts leading and trailing \[lq]/\[rq] on
+\[en]baseurl, so \[en]baseurl \[lq]rclone\[rq], \[en]baseurl
+\[lq]/rclone\[rq] and \[en]baseurl \[lq]/rclone/\[rq] are all treated
+identically.
.SS Authentication
.PP
By default this will serve files without needing a login.
@@ -4748,6 +5161,79 @@ This mode should support all normal file system operations.
.PP
If an upload or download fails it will be retried up to
\[en]low\-level\-retries times.
+.SS Auth Proxy
+.PP
+If you supply the parameter \f[C]\-\-auth\-proxy\ /path/to/program\f[]
+then rclone will use that program to generate backends on the fly which
+then are used to authenticate incoming requests.
+This uses a simple JSON based protocol with input on STDIN and output on
+STDOUT.
+.PP
+There is an example program
+bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py)
+in the rclone source code.
+.PP
+The program's job is to take a \f[C]user\f[] and \f[C]pass\f[] on the
+input and turn those into the config for a backend on STDOUT in JSON
+format.
+This config will have any default parameters for the backend added, but
+it won't use configuration from environment variables or command line
+options \- it is the job of the proxy program to make a complete config.
+.PP
+The config generated must have this extra parameter \- \f[C]_root\f[]
+\- root to use for the backend
+.PP
+And it may have this parameter \- \f[C]_obscure\f[] \- comma separated
+strings for parameters to obscure
+.PP
+For example the program might take this on STDIN
+.IP
+.nf
+\f[C]
+{
+\ \ \ \ "user":\ "me",
+\ \ \ \ "pass":\ "mypassword"
+}
+\f[]
+.fi
+.PP
+And return this on STDOUT
+.IP
+.nf
+\f[C]
+{
+\ \ \ \ "type":\ "sftp",
+\ \ \ \ "_root":\ "",
+\ \ \ \ "_obscure":\ "pass",
+\ \ \ \ "user":\ "me",
+\ \ \ \ "pass":\ "mypassword",
+\ \ \ \ "host":\ "sftp.example.com"
+}
+\f[]
+.fi
+.PP
+This would mean that an SFTP backend would be created on the fly for the
+\f[C]user\f[] and \f[C]pass\f[] returned in the output to the host
+given.
+Note that since \f[C]_obscure\f[] is set to \f[C]pass\f[], rclone will
+obscure the \f[C]pass\f[] parameter before creating the backend (which
+is required for sftp backends).
+.PP
+The program can manipulate the supplied \f[C]user\f[] in any way, for
+example, to proxy to many different sftp backends, you could make
+the \f[C]user\f[] be \f[C]user\@example.com\f[] and then set the
+\f[C]host\f[] to \f[C]example.com\f[] in the output and the user to
+\f[C]user\f[].
+For security you'd probably want to restrict the \f[C]host\f[] to a
+limited list.
+.PP
+Note that an internal cache is keyed on \f[C]user\f[] so only use that
+for configuration, don't use \f[C]pass\f[].
+This also means that if a user's password is changed the cache will need
+to expire (which takes 5 mins) before it takes effect.
+.PP
+This can be used to build general purpose proxies to any kind of backend
+that rclone supports.
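A minimal proxy program implementing the protocol described above might look like the following Python sketch. It is not rclone's bundled test_proxy.py; the `user@host` convention and the `ALLOWED_HOSTS` allow-list are illustrative assumptions, not part of rclone.

```python
#!/usr/bin/env python3
# Sketch of an rclone --auth-proxy program: read {"user", "pass"} JSON
# on STDIN and write a backend config JSON on STDOUT.
import json
import sys

# Hypothetical allow-list restricting which hosts users may be proxied to.
ALLOWED_HOSTS = {"sftp.example.com"}

def make_config(auth):
    """Turn {"user": ..., "pass": ...} into a backend config dict."""
    user = auth["user"]
    host = "sftp.example.com"
    if "@" in user:  # allow user@host style logins to pick the backend host
        user, host = user.rsplit("@", 1)
    if host not in ALLOWED_HOSTS:
        raise ValueError("host not allowed: " + host)
    return {
        "type": "sftp",
        "_root": "",          # root to use for the backend
        "_obscure": "pass",   # rclone obscures "pass" before creating the backend
        "user": user,
        "pass": auth["pass"],
        "host": host,
    }

def main():
    # rclone writes the auth JSON to STDIN and reads the config from STDOUT.
    # Call main() under "if __name__ == '__main__':" when using as a script.
    print(json.dumps(make_config(json.load(sys.stdin))))
```

Because the cache is keyed on `user`, deriving the host from the user name as above keeps one cache entry per user/host pair.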
.IP .nf \f[C] @@ -4759,6 +5245,8 @@ rclone\ serve\ webdav\ remote:path\ [flags] .nf \f[C] \ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:8080") +\ \ \ \ \ \ \-\-auth\-proxy\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A\ program\ to\ use\ to\ create\ the\ backend\ from\ the\ auth. +\ \ \ \ \ \ \-\-baseurl\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Prefix\ for\ URLs\ \-\ leave\ blank\ for\ root. \ \ \ \ \ \ \-\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ key\ (concatenation\ of\ certificate\ and\ CA\ certificate) \ \ \ \ \ \ \-\-client\-ca\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ certificate\ authority\ to\ verify\ clients\ with \ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s) @@ -4791,11 +5279,13 @@ rclone\ serve\ webdav\ remote:path\ [flags] \ \ \ \ \ \ \-\-vfs\-read\-chunk\-size\-limit\ SizeSuffix\ \ \ If\ greater\ than\ \-\-vfs\-read\-chunk\-size,\ double\ the\ chunk\ size\ after\ each\ chunk\ read,\ until\ the\ limit\ is\ reached.\ \[aq]off\[aq]\ is\ unlimited.\ (default\ off) \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone serve (https://rclone.org/commands/rclone_serve/) \- Serve a remote over a protocol. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone settier .PP Changes storage class/tier of objects in remote. @@ -4850,11 +5340,13 @@ rclone\ settier\ tier\ remote:path\ [flags] \ \ \-h,\ \-\-help\ \ \ help\ for\ settier \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. 
-.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone touch .PP Create new file or change file modification time. @@ -4876,11 +5368,13 @@ rclone\ touch\ remote:path\ [flags] \ \ \-t,\ \-\-timestamp\ string\ \ \ Change\ the\ modification\ times\ to\ the\ specified\ time\ instead\ of\ the\ current\ time\ of\ day.\ The\ argument\ is\ of\ the\ form\ \[aq]YYMMDD\[aq]\ (ex.\ 17.10.30)\ or\ \[aq]YYYY\-MM\-DDTHH:MM:SS\[aq]\ (ex.\ 2006\-01\-02T15:04:05) \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS rclone tree .PP List the contents of the remote in a tree like fashion. @@ -4947,11 +5441,13 @@ rclone\ tree\ remote:path\ [flags] \ \ \ \ \ \ \-\-version\ \ \ \ \ \ \ \ \ Sort\ files\ alphanumerically\ by\ version. \f[] .fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. .SS SEE ALSO .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) \- Show help for rclone commands, flags and backends. -.SS Auto generated by spf13/cobra on 15\-Jun\-2019 .SS Copying single files .PP rclone normally syncs or copies directories. @@ -5235,6 +5731,8 @@ any files which would have been updated or deleted will be stored in If running rclone from a script you might want to use today's date as the directory name passed to \f[C]\-\-backup\-dir\f[] to store the old files, or you might want to pass \f[C]\-\-suffix\f[] with today's date. +.PP +See \f[C]\-\-compare\-dest\f[] and \f[C]\-\-copy\-dest\f[]. .SS \[en]bind string .PP Local address to bind to for outgoing connections. @@ -5370,6 +5868,19 @@ run much quicker than without the \f[C]\-\-checksum\f[] flag. .PP When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally. 
+.SS \[en]compare\-dest=DIR +.PP +When using \f[C]sync\f[], \f[C]copy\f[] or \f[C]move\f[] DIR is checked +in addition to the destination for files. +If a file identical to the source is found that file is NOT copied from +source. +This is useful to copy just files that have changed since the last +backup. +.PP +You must use the same remote as the destination of the sync. +The compare directory must not overlap the destination directory. +.PP +See \f[C]\-\-copy\-dest\f[] and \f[C]\-\-backup\-dir\f[]. .SS \[en]config=CONFIG_FILE .PP Specify the location of the rclone config file. @@ -5399,6 +5910,19 @@ seconds, \f[C]10m\f[] for 10 minutes, or \f[C]3h30m\f[]. The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is \f[C]1m\f[] by default. +.SS \[en]copy\-dest=DIR +.PP +When using \f[C]sync\f[], \f[C]copy\f[] or \f[C]move\f[] DIR is checked +in addition to the destination for files. +If a file identical to the source is found that file is server side +copied from DIR to the destination. +This is useful for incremental backup. +.PP +The remote in use must support server side copy and you must use the +same remote as the destination of the sync. +The compare directory must not overlap the destination directory. +.PP +See \f[C]\-\-compare\-dest\f[] and \f[C]\-\-backup\-dir\f[]. .SS \[en]dedupe\-mode MODE .PP Mode to run dedupe command in. @@ -5548,6 +6072,10 @@ It outputs warnings and significant events. .PP \f[C]ERROR\f[] is equivalent to \f[C]\-q\f[]. It only outputs error messages. +.SS \[en]use\-json\-log +.PP +This switches the log format to JSON for rclone. +The fields of json log are level, msg, source, time. .SS \[en]low\-level\-retries NUMBER .PP This controls the number of low level retries rclone does. 
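The difference between \[en]compare\-dest and \[en]copy\-dest described above can be sketched as the following decision logic. The function and the hash-map arguments are illustrative, not rclone's internals.

```python
def plan_transfer(src, dst_files, compare_files=None, copy_files=None):
    """Decide what to do with one source file under --compare-dest /
    --copy-dest semantics. src is a (path, hash) pair; each *_files
    argument maps a relative path to its hash."""
    path, digest = src
    if dst_files.get(path) == digest:
        return "skip"              # already up to date at the destination
    if compare_files and compare_files.get(path) == digest:
        return "skip"              # identical file found in --compare-dest
    if copy_files and copy_files.get(path) == digest:
        return "server-side copy"  # copy within the remote, no upload needed
    return "upload"                # genuinely new or changed file
```

This is why \[en]copy\-dest requires server side copy support: the third branch moves data within the remote rather than uploading it.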
@@ -5650,6 +6178,10 @@ Multi thread downloads will be used with \f[C]rclone\ mount\f[] and .PP \f[B]NB\f[] that this \f[B]only\f[] works for a local destination but will work with any source. +.PP +\f[B]NB\f[] that multi thread copies are disabled for local to local +copies as they are faster without unless +\f[C]\-\-multi\-thread\-streams\f[] is set explicitly. .SS \[en]multi\-thread\-streams=N .PP When using multi thread downloads (see above @@ -5829,12 +6361,29 @@ So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s. The default is \f[C]bytes\f[]. .SS \[en]suffix=SUFFIX .PP -This is for use with \f[C]\-\-backup\-dir\f[] only. -If this isn't set then \f[C]\-\-backup\-dir\f[] will move files with -their original name. -If it is set then the files will have SUFFIX added on to them. +When using \f[C]sync\f[], \f[C]copy\f[] or \f[C]move\f[] any files which +would have been overwritten or deleted will have the suffix added to +them. +If there is a file with the same path (after the suffix has been added), +then it will be overwritten. .PP +The remote in use must support server side move or copy and you must use +the same remote as the destination of the sync. +.PP +This is for use with files to add the suffix in the current directory or +with \f[C]\-\-backup\-dir\f[]. See \f[C]\-\-backup\-dir\f[] for more info. +.PP +For example +.IP +.nf +\f[C] +rclone\ sync\ /path/to/local/file\ remote:current\ \-\-suffix\ .bak +\f[] +.fi +.PP +will sync \f[C]/path/to/local\f[] to \f[C]remote:current\f[], but for +any files which would have been updated or deleted have .bak added. .SS \[en]suffix\-keep\-extension .PP When using \f[C]\-\-suffix\f[], setting this causes rclone put the @@ -6008,15 +6557,18 @@ If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different. 
.PP
-On remotes which don't support mod time directly the time checked will
-be the uploaded time.
+On remotes which don't support mod time directly (or when using
+\f[C]\-\-use\-server\-mod\-time\f[]) the time checked will be the
+uploaded time.
This means that if uploading to one of these remotes, rclone will skip
any files which exist on the destination and have an uploaded time that
is newer than the modification time of the source file.
.PP
This can be useful when transferring to a remote which doesn't support
-mod times directly as it is more accurate than a \f[C]\-\-size\-only\f[]
-check and faster than using \f[C]\-\-checksum\f[].
+mod times directly (or when using \f[C]\-\-use\-server\-mod\-time\f[] to
+avoid extra API calls) as it is more accurate than a
+\f[C]\-\-size\-only\f[] check and faster than using
+\f[C]\-\-checksum\f[].
.SS \[en]use\-mmap
.PP
If this flag is set then rclone will use anonymous memory allocated by
@@ -6042,10 +6594,16 @@ modtime is needed by an operation.
.PP
Use this flag to disable the extra API call and rely instead on the
server's modified time.
-In cases such as a local to remote sync, knowing the local file is newer
-than the time it was last uploaded to the remote is sufficient.
+In cases such as a local to remote sync using \f[C]\-\-update\f[],
+knowing the local file is newer than the time it was last uploaded to
+the remote is sufficient.
In those cases, this flag can speed up the process and reduce the number
of API calls necessary.
+.PP
+Using this flag on a sync operation without also using
+\f[C]\-\-update\f[] would cause all files modified at any time other
+than the last upload time to be uploaded again, which is probably not
+what you want.
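The \[en]update comparison that makes \[en]use\-server\-mod\-time safe can be sketched as follows. The helper name is illustrative, not rclone's code; with \[en]use\-server\-mod\-time the destination time passed in is the upload time rather than the file's mtime.

```python
def should_upload(src_mtime, dst_time, size_differs=True):
    """--update semantics: skip when the destination time is newer than
    the source modification time. Times are seconds since the epoch."""
    if dst_time > src_mtime:
        return False          # destination considered newer: skip
    if dst_time == src_mtime:
        return size_differs   # equal times: fall back to a size check
    return True               # source is newer: upload
```

Without \[en]update this comparison is not applied, so any file whose mtime merely differs from the stored time would be transferred again.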
.SS \-v, \-vv, \[en]verbose .PP With \f[C]\-v\f[] rclone will tell you about each file that is @@ -6966,7 +7524,7 @@ This will transfer these files only (if they exist) .nf \f[C] /home/me/pics/file1.jpg\ \ \ \ \ \ \ \ →\ remote:pics/file1.jpg -/home/me/pics/subdir/file2.jpg\ →\ remote:pics/subdirfile1.jpg +/home/me/pics/subdir/file2.jpg\ →\ remote:pics/subdir/file2.jpg \f[] .fi .PP @@ -7008,7 +7566,7 @@ the \f[C]files\-from.txt\f[] like this: \f[C] /home/user1/important\ →\ remote:backup/user1/important /home/user1/dir/file\ \ →\ remote:backup/user1/dir/file -/home/user2/stuff\ \ \ \ \ →\ remote:backup/stuff +/home/user2/stuff\ \ \ \ \ →\ remote:backup/user2/stuff \f[] .fi .PP @@ -7036,9 +7594,9 @@ remote: .IP .nf \f[C] -/home/user1/important\ →\ remote:home/backup/user1/important -/home/user1/dir/file\ \ →\ remote:home/backup/user1/dir/file -/home/user2/stuff\ \ \ \ \ →\ remote:home/backup/stuff +/home/user1/important\ →\ remote:backup/home/user1/important +/home/user1/dir/file\ \ →\ remote:backup/home/user1/dir/file +/home/user2/stuff\ \ \ \ \ →\ remote:backup/home/user2/stuff \f[] .fi .SS \f[C]\-\-min\-size\f[] \- Don't transfer any file smaller than this @@ -7181,6 +7739,135 @@ rclone\ sync\ \-\-exclude\-if\-present\ .ignore\ dir1\ remote:backup .PP Currently only one filename is supported, i.e. \f[C]\-\-exclude\-if\-present\f[] should not be used multiple times. +.SH GUI (Experimental) +.PP +Rclone can serve a web based GUI (graphical user interface). +This is somewhat experimental at the moment so things may be subject to +change. +.PP +Run this command in a terminal and rclone will download and then display +the GUI in a web browser. 
+.IP +.nf +\f[C] +rclone\ rcd\ \-\-rc\-web\-gui +\f[] +.fi +.PP +This will produce logs like this and rclone needs to continue to run to +serve the GUI: +.IP +.nf +\f[C] +2019/08/25\ 11:40:14\ NOTICE:\ A\ new\ release\ for\ gui\ is\ present\ at\ https://github.com/rclone/rclone\-webui\-react/releases/download/v0.0.6/currentbuild.zip +2019/08/25\ 11:40:14\ NOTICE:\ Downloading\ webgui\ binary.\ Please\ wait.\ [Size:\ 3813937,\ Path\ :\ \ /home/USER/.cache/rclone/webgui/v0.0.6.zip] +2019/08/25\ 11:40:16\ NOTICE:\ Unzipping +2019/08/25\ 11:40:16\ NOTICE:\ Serving\ remote\ control\ on\ http://127.0.0.1:5572/ +\f[] +.fi +.PP +This assumes you are running rclone locally on your machine. +It is possible to separate the rclone and the GUI \- see below for +details. +.PP +If you wish to update to the latest API version then you can add +\f[C]\-\-rc\-web\-gui\-update\f[] to the command line. +.SS Using the GUI +.PP +Once the GUI opens, you will be looking at the dashboard which has an +overall overview. +.PP +On the left hand side you will see a series of view buttons you can +click on: +.IP \[bu] 2 +Dashboard \- main overview +.IP \[bu] 2 +Configs \- examine and create new configurations +.IP \[bu] 2 +Explorer \- view, download and upload files to the cloud storage systems +.IP \[bu] 2 +Backend \- view or alter the backend config +.IP \[bu] 2 +Log out +.PP +(More docs and walkthrough video to come!) +.SS How it works +.PP +When you run the \f[C]rclone\ rcd\ \-\-rc\-web\-gui\f[] this is what +happens +.IP \[bu] 2 +Rclone starts but only runs the remote control API (\[lq]rc\[rq]). +.IP \[bu] 2 +The API is bound to localhost with an auto generated username and +password. +.IP \[bu] 2 +If the API bundle is missing then rclone will download it. +.IP \[bu] 2 +rclone will start serving the files from the API bundle over the same +port as the API +.IP \[bu] 2 +rclone will open the browser with a \f[C]login_token\f[] so it can log +straight in. 
+.SS Advanced use
+.PP
+The \f[C]rclone\ rcd\f[] may use any of the flags documented on the rc
+page (https://rclone.org/rc/#supported-parameters).
+.PP
+The flag \f[C]\-\-rc\-web\-gui\f[] is shorthand for
+.IP \[bu] 2
+Download the web GUI if necessary
+.IP \[bu] 2
+Check we are using some authentication
+.IP \[bu] 2
+\f[C]\-\-rc\-user\ gui\f[]
+.IP \[bu] 2
+\f[C]\-\-rc\-pass\ \f[]
+.IP \[bu] 2
+\f[C]\-\-rc\-serve\f[]
+.PP
+These flags can be overridden as desired.
+.PP
+See also the rclone rcd
+documentation (https://rclone.org/commands/rclone_rcd/).
+.SS Example: Running a public GUI
+.PP
+For example the GUI could be served on a public port over SSL using an
+htpasswd file using the following flags:
+.IP \[bu] 2
+\f[C]\-\-rc\-web\-gui\f[]
+.IP \[bu] 2
+\f[C]\-\-rc\-addr\ :443\f[]
+.IP \[bu] 2
+\f[C]\-\-rc\-htpasswd\ /path/to/htpasswd\f[]
+.IP \[bu] 2
+\f[C]\-\-rc\-cert\ /path/to/ssl.crt\f[]
+.IP \[bu] 2
+\f[C]\-\-rc\-key\ /path/to/ssl.key\f[]
+.SS Example: Running a GUI behind a proxy
+.PP
+If you want to run the GUI behind a proxy at \f[C]/rclone\f[] you could
+use these flags:
+.IP \[bu] 2
+\f[C]\-\-rc\-web\-gui\f[]
+.IP \[bu] 2
+\f[C]\-\-rc\-baseurl\ rclone\f[]
+.IP \[bu] 2
+\f[C]\-\-rc\-htpasswd\ /path/to/htpasswd\f[]
+.PP
+Or instead of an htpasswd file, if you just want a single user and
+password:
+.IP \[bu] 2
+\f[C]\-\-rc\-user\ me\f[]
+.IP \[bu] 2
+\f[C]\-\-rc\-pass\ mypassword\f[]
+.SS Project
+.PP
+The GUI is being developed in the rclone/rclone\-webui\-react
+repository (https://github.com/rclone/rclone-webui-react).
+.PP
+Bug reports and contributions are very welcome :\-)
+.PP
+If you have questions then please ask them on the rclone
+forum (https://forum.rclone.org/).
.SH Remote controlling rclone
.PP
If rclone is run with the \f[C]\-\-rc\f[] flag then it starts an http
@@ -7252,6 +7939,31 @@ that is opened will have the authorization in the URL in the
\f[C]http://user:pass\@localhost/\f[] style.
.PP
Default Off.
+.SS \[en]rc\-web\-gui
+.PP
+Set this flag to serve the default web gui on the same port as rclone.
+.PP
+Default Off.
+.SS \[en]rc\-allow\-origin
+.PP
+Set the allowed Access\-Control\-Allow\-Origin for rc requests.
+.PP
+Can be used with \[en]rc\-web\-gui if rclone is running on a different
+IP than the web\-gui.
+.PP
+Default is IP address on which rc is running.
+.SS \[en]rc\-web\-fetch\-url
+.PP
+Set the URL to fetch the rclone\-web\-gui files from.
+.PP
+Default
+https://api.github.com/repos/rclone/rclone\-webui\-react/releases/latest.
+.SS \[en]rc\-web\-gui\-update
+.PP
+Set this flag to download / force an update of rclone\-webui\-react from
+the rc\-web\-fetch\-url.
+.PP
+Default Off.
.SS \[en]rc\-job\-expire\-duration=DURATION
.PP
Expire finished async jobs older than DURATION (default 60s).
@@ -7319,6 +8031,10 @@ The rc interface supports some special parameters which apply to
These start with \f[C]_\f[] to show they are different.
.SS Running asynchronous jobs with _async = true
.PP
+Each rc call is classified as a job and it is assigned its own id.
+By default jobs are executed immediately as they are created, i.e.
+synchronously.
+.PP
If \f[C]_async\f[] has a true value when supplied to an rc call then it
will return immediately with a job id and the task will be run in the
background.
@@ -7387,8 +8103,30 @@ $\ rclone\ rc\ job/list
}
\f[]
.fi
+.SS Assigning operations to groups with _group =
+.PP
+Each rc call has its own stats group for tracking its metrics.
+By default grouping is done by the composite group name from the prefix
+\f[C]job/\f[] and the id of the job like so \f[C]job/1\f[].
+.PP
+If \f[C]_group\f[] has a value then stats for that request will be
+grouped under that value.
+This allows the caller to group stats under their own name.
+.PP +Stats for specific group can be accessed by passing \f[C]group\f[] to +\f[C]core/stats\f[]: +.IP +.nf +\f[C] +$\ rclone\ rc\ \-\-json\ \[aq]{\ "group":\ "job/1"\ }\[aq]\ core/stats +{ +\ \ \ \ "speed":\ 12345 +\ \ \ \ ... +} +\f[] +.fi .SS Supported commands -.SS cache/expire: Purge a remote from cache +.SS cache/expire: Purge a remote from cache {#cache/expire} .PP Purge a remote from the cache backend. Supports either a directory or a file. @@ -7403,7 +8141,7 @@ rclone\ rc\ cache/expire\ remote=path/to/sub/folder/ rclone\ rc\ cache/expire\ remote=/\ withData=true \f[] .fi -.SS cache/fetch: Fetch file chunks +.SS cache/fetch: Fetch file chunks {#cache/fetch} .PP Ensure the specified file chunks are cached on disk. .PP @@ -7434,10 +8172,10 @@ rclone\ rc\ cache/fetch\ chunks=0\ file=hello\ file2=home/goodbye .PP File names will automatically be encrypted when the a crypt remote is used on top of the cache. -.SS cache/stats: Get cache stats +.SS cache/stats: Get cache stats {#cache/stats} .PP Show statistics for the cache remote. -.SS config/create: create the config for a remote. +.SS config/create: create the config for a remote. {#config/create} .PP This takes the following parameters .IP \[bu] 2 @@ -7450,7 +8188,7 @@ command (https://rclone.org/commands/rclone_config_create/) command for more information on the above. .PP Authentication is required for this call. -.SS config/delete: Delete a remote in the config file. +.SS config/delete: Delete a remote in the config file. {#config/delete} .PP Parameters: \- name \- name of remote to delete .PP @@ -7459,7 +8197,7 @@ command (https://rclone.org/commands/rclone_config_delete/) command for more information on the above. .PP Authentication is required for this call. -.SS config/dump: Dumps the config file. +.SS config/dump: Dumps the config file. 
{#config/dump} .PP Returns a JSON object: \- key: value .PP @@ -7470,7 +8208,7 @@ command (https://rclone.org/commands/rclone_config_dump/) command for more information on the above. .PP Authentication is required for this call. -.SS config/get: Get a remote in the config file. +.SS config/get: Get a remote in the config file. {#config/get} .PP Parameters: \- name \- name of remote to get .PP @@ -7480,6 +8218,7 @@ more information on the above. .PP Authentication is required for this call. .SS config/listremotes: Lists the remotes in the config file. +{#config/listremotes} .PP Returns \- remotes \- array of remote names .PP @@ -7489,6 +8228,7 @@ more information on the above. .PP Authentication is required for this call. .SS config/password: password the config for a remote. +{#config/password} .PP This takes the following parameters .IP \[bu] 2 @@ -7500,7 +8240,7 @@ for more information on the above. .PP Authentication is required for this call. .SS config/providers: Shows how providers are configured in the config -file. +file. {#config/providers} .PP Returns a JSON object: \- providers \- array of objects .PP @@ -7509,7 +8249,7 @@ command (https://rclone.org/commands/rclone_config_providers/) command for more information on the above. .PP Authentication is required for this call. -.SS config/update: update the config for a remote. +.SS config/update: update the config for a remote. {#config/update} .PP This takes the following parameters .IP \[bu] 2 @@ -7520,7 +8260,7 @@ command (https://rclone.org/commands/rclone_config_update/) command for more information on the above. .PP Authentication is required for this call. -.SS core/bwlimit: Set the bandwidth limit. +.SS core/bwlimit: Set the bandwidth limit. {#core/bwlimit} .PP This sets the bandwidth limit to that passed in. 
.PP
@@ -7528,93 +8268,156 @@ Eg
.IP
.nf
\f[C]
-rclone\ rc\ core/bwlimit\ rate=1M
rclone\ rc\ core/bwlimit\ rate=off
+{
+\ \ \ \ "bytesPerSecond":\ \-1,
+\ \ \ \ "rate":\ "off"
+}
+rclone\ rc\ core/bwlimit\ rate=1M
+{
+\ \ \ \ "bytesPerSecond":\ 1048576,
+\ \ \ \ "rate":\ "1M"
+}
+\f[]
+.fi
+.PP
+If the rate parameter is not supplied then the bandwidth is queried
+.IP
+.nf
+\f[C]
+rclone\ rc\ core/bwlimit
+{
+\ \ \ \ "bytesPerSecond":\ 1048576,
+\ \ \ \ "rate":\ "1M"
+}
\f[]
.fi
.PP
The format of the parameter is exactly the same as passed to
\[en]bwlimit except only one bandwidth may be specified.
-.SS core/gc: Runs a garbage collection.
+.PP
+In either case \[lq]rate\[rq] is returned as a human readable string,
+and \[lq]bytesPerSecond\[rq] is returned as a number.
+.SS core/gc: Runs a garbage collection. {#core/gc}
.PP
This tells the go runtime to do a garbage collection run.
It isn't necessary to call this normally, but it can be useful for
debugging memory problems.
-.SS core/memstats: Returns the memory statistics
+.SS core/group\-list: Returns list of stats. {#core/group\-list}
.PP
-This returns the memory statistics of the running program.
-What the values mean are explained in the go docs:
-https://golang.org/pkg/runtime/#MemStats
-.PP
-The most interesting values for most people are:
-.IP \[bu] 2
-HeapAlloc: This is the amount of memory rclone is actually using
-.IP \[bu] 2
-HeapSys: This is the amount of memory rclone has obtained from the OS
-.IP \[bu] 2
-Sys: this is the total amount of memory requested from the OS
-.RS 2
-.IP \[bu] 2
-It is virtual memory so may include unused memory
-.RE
-.SS core/obscure: Obscures a string passed in.
-.PP
-Pass a clear string and rclone will obscure it for the config file: \-
-clear \- string
-.PP
-Returns \- obscured \- string
-.SS core/pid: Return PID of current process
-.PP
-This returns PID of current process.
-Useful for stopping rclone process.
-.SS core/stats: Returns stats about current transfers.
-.PP -This returns all available stats -.IP -.nf -\f[C] -rclone\ rc\ core/stats -\f[] -.fi +This returns list of stats groups currently in memory. .PP Returns the following values: .IP .nf \f[C] { -\ \ \ \ "speed":\ average\ speed\ in\ bytes/sec\ since\ start\ of\ the\ process, -\ \ \ \ "bytes":\ total\ transferred\ bytes\ since\ the\ start\ of\ the\ process, -\ \ \ \ "errors":\ number\ of\ errors, -\ \ \ \ "fatalError":\ whether\ there\ has\ been\ at\ least\ one\ FatalError, -\ \ \ \ "retryError":\ whether\ there\ has\ been\ at\ least\ one\ non\-NoRetryError, -\ \ \ \ "checks":\ number\ of\ checked\ files, -\ \ \ \ "transfers":\ number\ of\ transferred\ files, -\ \ \ \ "deletes"\ :\ number\ of\ deleted\ files, -\ \ \ \ "elapsedTime":\ time\ in\ seconds\ since\ the\ start\ of\ the\ process, -\ \ \ \ "lastError":\ last\ occurred\ error, -\ \ \ \ "transferring":\ an\ array\ of\ currently\ active\ file\ transfers: +\ \ \ \ "groups":\ \ an\ array\ of\ group\ names: \ \ \ \ \ \ \ \ [ -\ \ \ \ \ \ \ \ \ \ \ \ { -\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "bytes":\ total\ transferred\ bytes\ for\ this\ file, -\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "eta":\ estimated\ time\ in\ seconds\ until\ file\ transfer\ completion -\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "name":\ name\ of\ the\ file, -\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "percentage":\ progress\ of\ the\ file\ transfer\ in\ percent, -\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "speed":\ speed\ in\ bytes/sec, -\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "speedAvg":\ speed\ in\ bytes/sec\ as\ an\ exponentially\ weighted\ moving\ average, -\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "size":\ size\ of\ the\ file\ in\ bytes -\ \ \ \ \ \ \ \ \ \ \ \ } -\ \ \ \ \ \ \ \ ], -\ \ \ \ "checking":\ an\ array\ of\ names\ of\ currently\ active\ file\ checks -\ \ \ \ \ \ \ \ [] +\ \ \ \ \ \ \ \ \ \ \ \ "group1", +\ \ \ \ \ \ \ \ \ \ \ \ "group2", +\ \ \ \ \ \ \ \ \ \ \ \ ... 
+\ \ \ \ \ \ \ \ ] } + +###\ core/memstats:\ Returns\ the\ memory\ statistics\ {#core/memstats} + +This\ returns\ the\ memory\ statistics\ of\ the\ running\ program.\ \ What\ the\ values\ mean +are\ explained\ in\ the\ go\ docs:\ https://golang.org/pkg/runtime/#MemStats + +The\ most\ interesting\ values\ for\ most\ people\ are: + +*\ HeapAlloc:\ This\ is\ the\ amount\ of\ memory\ rclone\ is\ actually\ using +*\ HeapSys:\ This\ is\ the\ amount\ of\ memory\ rclone\ has\ obtained\ from\ the\ OS +*\ Sys:\ this\ is\ the\ total\ amount\ of\ memory\ requested\ from\ the\ OS +\ \ *\ It\ is\ virtual\ memory\ so\ may\ include\ unused\ memory + +###\ core/obscure:\ Obscures\ a\ string\ passed\ in.\ {#core/obscure} + +Pass\ a\ clear\ string\ and\ rclone\ will\ obscure\ it\ for\ the\ config\ file: +\-\ clear\ \-\ string + +Returns +\-\ obscured\ \-\ string + +###\ core/pid:\ Return\ PID\ of\ current\ process\ {#core/pid} + +This\ returns\ PID\ of\ current\ process. +Useful\ for\ stopping\ rclone\ process. + +###\ core/stats:\ Returns\ stats\ about\ current\ transfers.\ {#core/stats} + +This\ returns\ all\ available\ stats: + +\ \ \ \ rclone\ rc\ core/stats + +If\ group\ is\ not\ provided\ then\ summed\ up\ stats\ for\ all\ groups\ will\ be +returned. + +Parameters +\-\ group\ \-\ name\ of\ the\ stats\ group\ (string) + +Returns\ the\ following\ values: \f[] .fi .PP -Values for \[lq]transferring\[rq], \[lq]checking\[rq] and -\[lq]lastError\[rq] are only assigned if data is available. -The value for \[lq]eta\[rq] is null if an eta cannot be determined. 
+{ \[lq]speed\[rq]: average speed in bytes/sec since start of the +process, \[lq]bytes\[rq]: total transferred bytes since the start of the +process, \[lq]errors\[rq]: number of errors, \[lq]fatalError\[rq]: +whether there has been at least one FatalError, \[lq]retryError\[rq]: +whether there has been at least one non\-NoRetryError, \[lq]checks\[rq]: +number of checked files, \[lq]transfers\[rq]: number of transferred +files, \[lq]deletes\[rq] : number of deleted files, +\[lq]elapsedTime\[rq]: time in seconds since the start of the process, +\[lq]lastError\[rq]: last occurred error, \[lq]transferring\[rq]: an +array of currently active file transfers: [ { \[lq]bytes\[rq]: total +transferred bytes for this file, \[lq]eta\[rq]: estimated time in +seconds until file transfer completion, \[lq]name\[rq]: name of the file, +\[lq]percentage\[rq]: progress of the file transfer in percent, +\[lq]speed\[rq]: speed in bytes/sec, \[lq]speedAvg\[rq]: speed in +bytes/sec as an exponentially weighted moving average, \[lq]size\[rq]: +size of the file in bytes } ], \[lq]checking\[rq]: an array of names of +currently active file checks [] } +.IP +.nf +\f[C] +Values\ for\ "transferring",\ "checking"\ and\ "lastError"\ are\ only\ assigned\ if\ data\ is\ available. +The\ value\ for\ "eta"\ is\ null\ if\ an\ eta\ cannot\ be\ determined. + +###\ core/stats\-reset:\ Reset\ stats.\ {#core/stats\-reset} + +This\ clears\ counters\ and\ errors\ for\ all\ stats\ or\ specific\ stats\ group\ if\ group +is\ provided. + +Parameters +\-\ group\ \-\ name\ of\ the\ stats\ group\ (string) + +###\ core/transferred:\ Returns\ stats\ about\ completed\ transfers.\ {#core/transferred} + +This\ returns\ stats\ about\ completed\ transfers: + +\ \ \ \ rclone\ rc\ core/transferred + +If\ group\ is\ not\ provided\ then\ completed\ transfers\ for\ all\ groups\ will\ be +returned.
+ +Parameters +\-\ group\ \-\ name\ of\ the\ stats\ group\ (string) + +Returns\ the\ following\ values: +\f[] +.fi +.PP +{ \[lq]transferred\[rq]: an array of completed transfers (including +failed ones): [ { \[lq]name\[rq]: name of the file, \[lq]size\[rq]: size +of the file in bytes, \[lq]bytes\[rq]: total transferred bytes for this +file, \[lq]checked\[rq]: if the transfer is only checked (skipped, +deleted), \[lq]timestamp\[rq]: integer representing millisecond unix +epoch, \[lq]error\[rq]: string description of the error (empty if +successful), \[lq]jobid\[rq]: id of the job that this transfer belongs +to } ] } .SS core/version: Shows the current version of rclone and the go -runtime. +runtime. {#core/version} .PP This shows the current version of go and the go runtime \- version \- rclone version, eg \[lq]v1.44\[rq] \- decomposed \- version number as @@ -7623,12 +8426,12 @@ for a git compiled version \- isGit \- boolean \- true if this was compiled from the git version \- os \- OS in use as according to Go \- arch \- cpu architecture in use according to Go \- goVersion \- version of Go runtime in use -.SS job/list: Lists the IDs of the running jobs +.SS job/list: Lists the IDs of the running jobs {#job/list} .PP Parameters \- None .PP Results \- jobids \- array of integer job ids -.SS job/status: Reads the status of the job ID +.SS job/status: Reads the status of the job ID {#job/status} .PP Parameters \- jobid \- id of the job (integer) .PP @@ -7639,8 +8442,13 @@ the job or empty string for no error \- finished \- boolean whether the job has finished or not \- id \- as passed in above \- startTime \- time the job started (eg \[lq]2018\-10\-26T18:50:20.528336039+01:00\[rq]) \- success \- boolean \- true for success false otherwise \- output \- -output of the job as would have been returned if called synchronously +output of the job as would have been returned if called synchronously \- +progress \- output of the progress related to the underlying job +.SS
job/stop: Stop the running job {#job/stop} +.PP +Parameters \- jobid \- id of the job (integer) .SS operations/about: Return the space used on the remote +{#operations/about} .PP This takes the following parameters .IP \[bu] 2 @@ -7653,6 +8461,7 @@ for more information on the above. .PP Authentication is required for this call. .SS operations/cleanup: Remove trashed files in the remote or path +{#operations/cleanup} .PP This takes the following parameters .IP \[bu] 2 @@ -7663,7 +8472,7 @@ command for more information on the above. .PP Authentication is required for this call. .SS operations/copyfile: Copy a file from source remote to destination -remote +remote {#operations/copyfile} .PP This takes the following parameters .IP \[bu] 2 @@ -7678,7 +8487,7 @@ dstRemote \- a path within that remote eg \[lq]file2.txt\[rq] for the destination .PP Authentication is required for this call. -.SS operations/copyurl: Copy the URL to the object +.SS operations/copyurl: Copy the URL to the object {#operations/copyurl} .PP This takes the following parameters .IP \[bu] 2 @@ -7692,7 +8501,7 @@ See the copyurl command (https://rclone.org/commands/rclone_copyurl/) command for more information on the above. .PP Authentication is required for this call. -.SS operations/delete: Remove files in the path +.SS operations/delete: Remove files in the path {#operations/delete} .PP This takes the following parameters .IP \[bu] 2 @@ -7703,6 +8512,7 @@ command for more information on the above. .PP Authentication is required for this call. .SS operations/deletefile: Remove the single file pointed to +{#operations/deletefile} .PP This takes the following parameters .IP \[bu] 2 @@ -7716,6 +8526,7 @@ more information on the above. .PP Authentication is required for this call. 
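The operations calls above can also be driven over the remote-control HTTP API directly rather than through `rclone rc`. A minimal Python sketch of posting an operations/copyfile request, assuming an rc server started with `--rc` on the default `localhost:5572` (the remote names and file paths are purely illustrative):

```python
import json
from urllib import request

RC_URL = "http://localhost:5572"  # default --rc-addr; adjust if you bind elsewhere

def rc_body(**params):
    # Encode rc parameters as the JSON body the rc API expects
    return json.dumps(params).encode("utf-8")

def copyfile(src_fs, src_remote, dst_fs, dst_remote):
    # POST to operations/copyfile; authentication is required for this call
    # unless the server was started with --rc-no-auth
    req = request.Request(
        RC_URL + "/operations/copyfile",
        data=rc_body(srcFs=src_fs, srcRemote=src_remote,
                     dstFs=dst_fs, dstRemote=dst_remote),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Example (needs a running rc server):
# copyfile("drive:", "file.txt", "drive2:", "file2.txt")
```

The same `rc_body` helper works for any of the calls in this section, since they all take flat parameter maps.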
.SS operations/fsinfo: Return information about the remote +{#operations/fsinfo} .PP This takes the following parameters .IP \[bu] 2 @@ -7779,6 +8590,7 @@ rclone\ rc\ \-\-loopback\ operations/fsinfo\ fs=remote: \f[] .fi .SS operations/list: List the given remote and path in JSON format +{#operations/list} .PP This takes the following parameters .IP \[bu] 2 @@ -7813,6 +8625,7 @@ more information on the above and examples. .PP Authentication is required for this call. .SS operations/mkdir: Make a destination directory or container +{#operations/mkdir} .PP This takes the following parameters .IP \[bu] 2 @@ -7825,7 +8638,7 @@ command for more information on the above. .PP Authentication is required for this call. .SS operations/movefile: Move a file from source remote to destination -remote +remote {#operations/movefile} .PP This takes the following parameters .IP \[bu] 2 @@ -7841,7 +8654,7 @@ destination .PP Authentication is required for this call. .SS operations/publiclink: Create or retrieve a public link to the given -file or folder. +file or folder. {#operations/publiclink} .PP This takes the following parameters .IP \[bu] 2 @@ -7858,7 +8671,7 @@ for more information on the above. .PP Authentication is required for this call. .SS operations/purge: Remove a directory or container and all of its -contents +contents {#operations/purge} .PP This takes the following parameters .IP \[bu] 2 @@ -7871,6 +8684,7 @@ command for more information on the above. .PP Authentication is required for this call. .SS operations/rmdir: Remove an empty directory or container +{#operations/rmdir} .PP This takes the following parameters .IP \[bu] 2 @@ -7883,6 +8697,7 @@ command for more information on the above. .PP Authentication is required for this call. .SS operations/rmdirs: Remove all the empty directories in the path +{#operations/rmdirs} .PP This takes the following parameters .IP \[bu] 2 @@ -7897,6 +8712,7 @@ command for more information on the above. 
.PP Authentication is required for this call. .SS operations/size: Count the number of bytes and files in remote +{#operations/size} .PP This takes the following parameters .IP \[bu] 2 @@ -7912,17 +8728,17 @@ See the size command (https://rclone.org/commands/rclone_size/) command for more information on the above. .PP Authentication is required for this call. -.SS options/blocks: List all the option blocks +.SS options/blocks: List all the option blocks {#options/blocks} .PP Returns \- options \- a list of the options block names -.SS options/get: Get all the options +.SS options/get: Get all the options {#options/get} .PP Returns an object where keys are option block names and values are an object with the current option values in. .PP This shows the internal names of the option within rclone which should map to the external options very easily with a few exceptions. -.SS options/set: Set an option +.SS options/set: Set an option {#options/set} .PP Parameters .IP \[bu] 2 @@ -7963,21 +8779,22 @@ And this sets NOTICE level logs (normal without \-v) rclone\ rc\ options/set\ \-\-json\ \[aq]{"main":\ {"LogLevel":\ 6}}\[aq] \f[] .fi -.SS rc/error: This returns an error +.SS rc/error: This returns an error {#rc/error} .PP This returns an error with the input as part of its error string. Useful for testing error handling. -.SS rc/list: List all the registered remote control commands +.SS rc/list: List all the registered remote control commands {#rc/list} .PP This lists all the registered remote control commands as a JSON map in the commands response. -.SS rc/noop: Echo the input to the output parameters +.SS rc/noop: Echo the input to the output parameters {#rc/noop} .PP This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly. 
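The options/set parameter structure shown above (an option block name mapping to option names and values) can be built programmatically instead of hand-writing the `--json` string. A small Python sketch using the NOTICE-level LogLevel example from this section (the helper name is invented for illustration):

```python
import json

def options_set_payload(block, **values):
    # options/set takes {block name: {OptionName: value}}
    return {block: dict(values)}

# This sets NOTICE level logs (normal without -v), as in the manual's example
payload = options_set_payload("main", LogLevel=6)
print(json.dumps(payload))  # {"main": {"LogLevel": 6}}
```

The resulting JSON string is exactly what `rclone rc options/set --json '...'` expects on the command line.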
.SS rc/noopauth: Echo the input to the output parameters requiring auth +{#rc/noopauth} .PP This echoes the input parameters to the output parameters for testing purposes. @@ -7986,6 +8803,7 @@ parameter passing is working properly. .PP Authentication is required for this call. .SS sync/copy: copy a directory from source remote to destination remote +{#sync/copy} .PP This takes the following parameters .IP \[bu] 2 @@ -7998,6 +8816,7 @@ for more information on the above. .PP Authentication is required for this call. .SS sync/move: move a directory from source remote to destination remote +{#sync/move} .PP This takes the following parameters .IP \[bu] 2 @@ -8012,6 +8831,7 @@ for more information on the above. .PP Authentication is required for this call. .SS sync/sync: sync a directory from source remote to destination remote +{#sync/sync} .PP This takes the following parameters .IP \[bu] 2 @@ -8024,6 +8844,7 @@ for more information on the above. .PP Authentication is required for this call. .SS vfs/forget: Forget files or directories in the directory cache. +{#vfs/forget} .PP This forgets the paths in the directory cache causing them to be re\-read from the remote when needed. @@ -8047,7 +8868,7 @@ rclone\ rc\ vfs/forget\ file=hello\ file2=goodbye\ dir=home/junk \f[] .fi .SS vfs/poll\-interval: Get the status or update the value of the -poll\-interval option. +poll\-interval option. {#vfs/poll\-interval} .PP Without any parameter given this returns the current status of the poll\-interval setting. @@ -8071,7 +8892,7 @@ reached. .PP If poll\-interval is updated or disabled temporarily, some changes might not get picked up by the polling function, depending on the used remote. -.SS vfs/refresh: Refresh the directory cache. +.SS vfs/refresh: Refresh the directory cache. {#vfs/refresh} .PP This reads the directories for the specified paths and freshens the directory cache. 
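vfs/forget accepts any number of file and dir parameters, with repeats distinguished by a numeric suffix as in the `file=hello file2=goodbye dir=home/junk` example above. A Python sketch of building that parameter map (the helper name is invented for illustration):

```python
def forget_params(files=(), dirs=()):
    # Number repeated file=/dir= keys: file, file2, file3, ... per the example
    params = {}
    for prefix, values in (("file", files), ("dir", dirs)):
        for i, v in enumerate(values):
            key = prefix if i == 0 else f"{prefix}{i + 1}"
            params[key] = v
    return params

print(forget_params(files=["hello", "goodbye"], dirs=["home/junk"]))
# {'file': 'hello', 'file2': 'goodbye', 'dir': 'home/junk'}
```

Passed as the rc request body, this forgets both files and the directory in one call, matching the multi-parameter form shown in the documentation.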
@@ -8373,6 +9194,19 @@ MIME Type T} _ T{ +1Fichier +T}@T{ +Whirlpool +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +R +T} +T{ Amazon Drive T}@T{ MD5 @@ -8477,6 +9311,19 @@ T}@T{ R/W T} T{ +Google Photos +T}@T{ +\- +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +R +T} +T{ HTTP T}@T{ \- @@ -8607,6 +9454,32 @@ T}@T{ W T} T{ +premiumize.me +T}@T{ +\- +T}@T{ +No +T}@T{ +Yes +T}@T{ +No +T}@T{ +R +T} +T{ +put.io +T}@T{ +CRC\-32 +T}@T{ +Yes +T}@T{ +No +T}@T{ +Yes +T}@T{ +R +T} +T{ QingStor T}@T{ MD5 @@ -8764,7 +9637,7 @@ more efficient. .PP .TS tab(@); -l c c c c c c c c c. +l c c c c c c c c c c. T{ Name T}@T{ @@ -8785,9 +9658,34 @@ T}@T{ LinkSharing T}@T{ About +T}@T{ +EmptyDir T} _ T{ +1Fichier +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T} +T{ Amazon Drive T}@T{ Yes @@ -8807,6 +9705,8 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ No +T}@T{ +Yes T} T{ Amazon S3 @@ -8828,6 +9728,8 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ No +T}@T{ +No T} T{ Backblaze B2 @@ -8846,7 +9748,9 @@ Yes T}@T{ Yes T}@T{ -No #2178 (https://github.com/rclone/rclone/issues/2178) +Yes +T}@T{ +No T}@T{ No T} @@ -8870,6 +9774,8 @@ T}@T{ Yes T}@T{ No +T}@T{ +Yes T} T{ Dropbox @@ -8891,6 +9797,8 @@ T}@T{ Yes T}@T{ Yes +T}@T{ +Yes T} T{ FTP @@ -8912,6 +9820,8 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ No +T}@T{ +Yes T} T{ Google Cloud Storage @@ -8933,6 +9843,8 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ No +T}@T{ +No T} T{ Google Drive @@ -8954,6 +9866,31 @@ T}@T{ Yes T}@T{ Yes +T}@T{ +Yes +T} +T{ +Google Photos +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No T} T{ HTTP @@ -8975,6 +9912,8 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ No +T}@T{ +Yes T} T{ Hubic @@ -8996,6 +9935,8 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ Yes 
+T}@T{ +No T} T{ Jottacloud @@ -9017,6 +9958,8 @@ T}@T{ Yes T}@T{ Yes +T}@T{ +Yes T} T{ Mega @@ -9038,6 +9981,8 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ Yes +T}@T{ +Yes T} T{ Microsoft Azure Blob Storage @@ -9059,6 +10004,8 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ No +T}@T{ +No T} T{ Microsoft OneDrive @@ -9080,6 +10027,8 @@ T}@T{ Yes T}@T{ Yes +T}@T{ +Yes T} T{ OpenDrive @@ -9101,6 +10050,8 @@ T}@T{ No T}@T{ No +T}@T{ +Yes T} T{ Openstack Swift @@ -9122,6 +10073,8 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ Yes +T}@T{ +No T} T{ pCloud @@ -9143,6 +10096,54 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ Yes +T}@T{ +Yes +T} +T{ +premiumize.me +T}@T{ +Yes +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T} +T{ +put.io +T}@T{ +Yes +T}@T{ +No +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +Yes +T}@T{ +No #2178 (https://github.com/rclone/rclone/issues/2178) +T}@T{ +Yes +T}@T{ +Yes T} T{ QingStor @@ -9164,6 +10165,8 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ No +T}@T{ +No T} T{ SFTP @@ -9185,6 +10188,8 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ Yes +T}@T{ +Yes T} T{ WebDAV @@ -9206,6 +10211,8 @@ T}@T{ No #2178 (https://github.com/rclone/rclone/issues/2178) T}@T{ Yes +T}@T{ +Yes T} T{ Yandex Disk @@ -9227,6 +10234,8 @@ T}@T{ Yes T}@T{ Yes +T}@T{ +Yes T} T{ The local filesystem @@ -9248,6 +10257,8 @@ T}@T{ No T}@T{ Yes +T}@T{ +Yes T} .TE .SS Purge @@ -9323,6 +10334,508 @@ This is also used to return the space used, available for .PP If the server can't do \f[C]About\f[] then \f[C]rclone\ about\f[] will return an error. +.SS EmptyDir +.PP +The remote supports empty directories. +See Limitations (/bugs/#limitations) for details. +Most Object/Bucket based remotes do not support this. 
+.SH Global Flags +.PP +This describes the global flags available to every rclone command split +into two groups, non backend and backend flags. +.SS Non Backend Flags +.PP +These flags are available for every command. +.IP +.nf +\f[C] +\ \ \ \ \ \ \-\-ask\-password\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ prompt\ for\ password\ for\ encrypted\ configuration.\ (default\ true) +\ \ \ \ \ \ \-\-auto\-confirm\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ If\ enabled,\ do\ not\ request\ console\ confirmation. +\ \ \ \ \ \ \-\-backup\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Make\ backups\ into\ hierarchy\ based\ in\ DIR. +\ \ \ \ \ \ \-\-bind\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Local\ address\ to\ bind\ to\ for\ outgoing\ connections,\ IPv4,\ IPv6\ or\ name. +\ \ \ \ \ \ \-\-buffer\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ In\ memory\ buffer\ size\ when\ reading\ files\ for\ each\ \-\-transfer.\ (default\ 16M) +\ \ \ \ \ \ \-\-bwlimit\ BwTimetable\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Bandwidth\ limit\ in\ kBytes/s,\ or\ use\ suffix\ b|k|M|G\ or\ a\ full\ timetable. 
+\ \ \ \ \ \ \-\-ca\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ CA\ certificate\ used\ to\ verify\ servers +\ \ \ \ \ \ \-\-cache\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ rclone\ will\ use\ for\ caching.\ (default\ "$HOME/.cache/rclone") +\ \ \ \ \ \ \-\-checkers\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Number\ of\ checkers\ to\ run\ in\ parallel.\ (default\ 8) +\ \ \-c,\ \-\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Skip\ based\ on\ checksum\ (if\ available)\ &\ size,\ not\ mod\-time\ &\ size +\ \ \ \ \ \ \-\-client\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ SSL\ certificate\ (PEM)\ for\ mutual\ TLS\ auth +\ \ \ \ \ \ \-\-client\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ SSL\ private\ key\ (PEM)\ for\ mutual\ TLS\ auth +\ \ \ \ \ \ \-\-compare\-dest\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ use\ DIR\ to\ server\ side\ copy\ files\ from. +\ \ \ \ \ \ \-\-config\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Config\ file.\ (default\ "$HOME/.config/rclone/rclone.conf") +\ \ \ \ \ \ \-\-contimeout\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Connect\ timeout\ (default\ 1m0s) +\ \ \ \ \ \ \-\-copy\-dest\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Compare\ dest\ to\ DIR\ also.
+\ \ \ \ \ \ \-\-cpuprofile\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Write\ cpu\ profile\ to\ file +\ \ \ \ \ \ \-\-delete\-after\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ When\ synchronizing,\ delete\ files\ on\ destination\ after\ transferring\ (default) +\ \ \ \ \ \ \-\-delete\-before\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ When\ synchronizing,\ delete\ files\ on\ destination\ before\ transferring +\ \ \ \ \ \ \-\-delete\-during\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ When\ synchronizing,\ delete\ files\ during\ transfer +\ \ \ \ \ \ \-\-delete\-excluded\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Delete\ files\ on\ dest\ excluded\ from\ sync +\ \ \ \ \ \ \-\-disable\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Disable\ a\ comma\ separated\ list\ of\ features.\ \ Use\ help\ to\ see\ a\ list. +\ \ \-n,\ \-\-dry\-run\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Do\ a\ trial\ run\ with\ no\ permanent\ changes +\ \ \ \ \ \ \-\-dump\ DumpFlags\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ List\ of\ items\ to\ dump\ from:\ headers,bodies,requests,responses,auth,filters,goroutines,openfiles +\ \ \ \ \ \ \-\-dump\-bodies\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Dump\ HTTP\ headers\ and\ bodies\ \-\ may\ contain\ sensitive\ info +\ \ \ \ \ \ \-\-dump\-headers\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Dump\ HTTP\ headers\ \-\ may\ contain\ sensitive\ info +\ \ \ \ \ \ \-\-exclude\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Exclude\ files\ matching\ pattern +\ \ \ \ \ \ \-\-exclude\-from\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ Read\ exclude\ patterns\ from\ file +\ \ \ \ \ \ \-\-exclude\-if\-present\ string\ \ \ \ \ \ \ \ \ \ \ \ Exclude\ directories\ if\ filename\ is\ present +\ \ \ \ \ \ \-\-fast\-list\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Use\ recursive\ list\ if\ available.\ Uses\ more\ memory\ but\ fewer\ transactions. 
+\ \ \ \ \ \ \-\-files\-from\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Read\ list\ of\ source\-file\ names\ from\ file +\ \ \-f,\ \-\-filter\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Add\ a\ file\-filtering\ rule +\ \ \ \ \ \ \-\-filter\-from\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ \ Read\ filtering\ patterns\ from\ a\ file +\ \ \ \ \ \ \-\-ignore\-case\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Ignore\ case\ in\ filters\ (case\ insensitive) +\ \ \ \ \ \ \-\-ignore\-case\-sync\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Ignore\ case\ when\ synchronizing +\ \ \ \ \ \ \-\-ignore\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Skip\ post\ copy\ check\ of\ checksums. +\ \ \ \ \ \ \-\-ignore\-errors\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ delete\ even\ if\ there\ are\ I/O\ errors +\ \ \ \ \ \ \-\-ignore\-existing\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Skip\ all\ files\ that\ exist\ on\ destination +\ \ \ \ \ \ \-\-ignore\-size\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Ignore\ size\ when\ skipping\ use\ mod\-time\ or\ checksum. +\ \ \-I,\ \-\-ignore\-times\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ skip\ files\ that\ match\ size\ and\ time\ \-\ transfer\ all\ files +\ \ \ \ \ \ \-\-immutable\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Do\ not\ modify\ files.\ Fail\ if\ existing\ files\ have\ been\ modified. 
+\ \ \ \ \ \ \-\-include\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Include\ files\ matching\ pattern +\ \ \ \ \ \ \-\-include\-from\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ Read\ include\ patterns\ from\ file +\ \ \ \ \ \ \-\-log\-file\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Log\ everything\ to\ this\ file +\ \ \ \ \ \ \-\-log\-format\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Comma\ separated\ list\ of\ log\ format\ options\ (default\ "date,time") +\ \ \ \ \ \ \-\-log\-level\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Log\ level\ DEBUG|INFO|NOTICE|ERROR\ (default\ "NOTICE") +\ \ \ \ \ \ \-\-low\-level\-retries\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Number\ of\ low\ level\ retries\ to\ do.\ (default\ 10) +\ \ \ \ \ \ \-\-max\-age\ Duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Only\ transfer\ files\ younger\ than\ this\ in\ s\ or\ suffix\ ms|s|m|h|d|w|M|y\ (default\ off) +\ \ \ \ \ \ \-\-max\-backlog\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Maximum\ number\ of\ objects\ in\ sync\ or\ check\ backlog.\ (default\ 10000) +\ \ \ \ \ \ \-\-max\-delete\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ When\ synchronizing,\ limit\ the\ number\ of\ deletes\ (default\ \-1) +\ \ \ \ \ \ \-\-max\-depth\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ If\ set\ limits\ the\ recursion\ depth\ to\ this.\ (default\ \-1) +\ \ \ \ \ \ \-\-max\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Only\ transfer\ files\ smaller\ than\ this\ in\ k\ or\ suffix\ b|k|M|G\ (default\ off) +\ \ \ \ \ \ \-\-max\-stats\-groups\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Maximum\ number\ of\ stats\ groups\ to\ keep\ in\ memory.\ On\ max\ oldest\ is\ discarded.\ (default\ 1000) +\ \ \ \ \ \ \-\-max\-transfer\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ Maximum\ size\ of\ data\ to\ transfer.\ (default\ off) +\ \ \ \ \ \ \-\-memprofile\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Write\ memory\ profile\ to\ file +\ \ \ \ \ \ \-\-min\-age\ Duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 
\ \ \ \ \ \ Only\ transfer\ files\ older\ than\ this\ in\ s\ or\ suffix\ ms|s|m|h|d|w|M|y\ (default\ off) +\ \ \ \ \ \ \-\-min\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Only\ transfer\ files\ bigger\ than\ this\ in\ k\ or\ suffix\ b|k|M|G\ (default\ off) +\ \ \ \ \ \ \-\-modify\-window\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Max\ time\ diff\ to\ be\ considered\ the\ same\ (default\ 1ns) +\ \ \ \ \ \ \-\-multi\-thread\-cutoff\ SizeSuffix\ \ \ \ \ \ \ Use\ multi\-thread\ downloads\ for\ files\ above\ this\ size.\ (default\ 250M) +\ \ \ \ \ \ \-\-multi\-thread\-streams\ int\ \ \ \ \ \ \ \ \ \ \ \ \ Max\ number\ of\ streams\ to\ use\ for\ multi\-thread\ downloads.\ (default\ 4) +\ \ \ \ \ \ \-\-no\-check\-certificate\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Do\ not\ verify\ the\ server\ SSL\ certificate.\ Insecure. +\ \ \ \ \ \ \-\-no\-gzip\-encoding\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ set\ Accept\-Encoding:\ gzip. +\ \ \ \ \ \ \-\-no\-traverse\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ traverse\ destination\ file\ system\ on\ copy. +\ \ \ \ \ \ \-\-no\-update\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ update\ destination\ mod\-time\ if\ files\ identical. +\ \ \-P,\ \-\-progress\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Show\ progress\ during\ transfer. +\ \ \-q,\ \-\-quiet\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Print\ as\ little\ stuff\ as\ possible +\ \ \ \ \ \ \-\-rc\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Enable\ the\ remote\ control\ server. +\ \ \ \ \ \ \-\-rc\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:5572") +\ \ \ \ \ \ \-\-rc\-allow\-origin\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Set\ the\ allowed\ origin\ for\ CORS. +\ \ \ \ \ \ \-\-rc\-baseurl\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Prefix\ for\ URLs\ \-\ leave\ blank\ for\ root. 
+\ \ \ \ \ \ \-\-rc\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ key\ (concatenation\ of\ certificate\ and\ CA\ certificate) +\ \ \ \ \ \ \-\-rc\-client\-ca\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ certificate\ authority\ to\ verify\ clients\ with +\ \ \ \ \ \ \-\-rc\-files\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Path\ to\ local\ files\ to\ serve\ on\ the\ HTTP\ server. +\ \ \ \ \ \ \-\-rc\-htpasswd\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ htpasswd\ file\ \-\ if\ not\ provided\ no\ authentication\ is\ done +\ \ \ \ \ \ \-\-rc\-job\-expire\-duration\ duration\ \ \ \ \ \ expire\ finished\ async\ jobs\ older\ than\ this\ value\ (default\ 1m0s) +\ \ \ \ \ \ \-\-rc\-job\-expire\-interval\ duration\ \ \ \ \ \ interval\ to\ check\ for\ expired\ async\ jobs\ (default\ 10s) +\ \ \ \ \ \ \-\-rc\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ Private\ key +\ \ \ \ \ \ \-\-rc\-max\-header\-bytes\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ Maximum\ size\ of\ request\ header\ (default\ 4096) +\ \ \ \ \ \ \-\-rc\-no\-auth\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ require\ auth\ for\ certain\ methods. +\ \ \ \ \ \ \-\-rc\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ for\ authentication. +\ \ \ \ \ \ \-\-rc\-realm\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ realm\ for\ authentication\ (default\ "rclone") +\ \ \ \ \ \ \-\-rc\-serve\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Enable\ the\ serving\ of\ remote\ objects. +\ \ \ \ \ \ \-\-rc\-server\-read\-timeout\ duration\ \ \ \ \ \ Timeout\ for\ server\ reading\ data\ (default\ 1h0m0s) +\ \ \ \ \ \ \-\-rc\-server\-write\-timeout\ duration\ \ \ \ \ Timeout\ for\ server\ writing\ data\ (default\ 1h0m0s) +\ \ \ \ \ \ \-\-rc\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ for\ authentication. 
+\ \ \ \ \ \ \-\-rc\-web\-fetch\-url\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ URL\ to\ fetch\ the\ releases\ for\ webgui.\ (default\ "https://api.github.com/repos/rclone/rclone\-webui\-react/releases/latest") +\ \ \ \ \ \ \-\-rc\-web\-gui\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Launch\ WebGUI\ on\ localhost +\ \ \ \ \ \ \-\-rc\-web\-gui\-update\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Update\ /\ Force\ update\ to\ latest\ version\ of\ web\ gui +\ \ \ \ \ \ \-\-retries\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Retry\ operations\ this\ many\ times\ if\ they\ fail\ (default\ 3) +\ \ \ \ \ \ \-\-retries\-sleep\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Interval\ between\ retrying\ operations\ if\ they\ fail,\ e.g\ 500ms,\ 60s,\ 5m.\ (0\ to\ disable) +\ \ \ \ \ \ \-\-size\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Skip\ based\ on\ size\ only,\ not\ mod\-time\ or\ checksum +\ \ \ \ \ \ \-\-stats\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Interval\ between\ printing\ stats,\ e.g\ 500ms,\ 60s,\ 5m.\ (0\ to\ disable)\ (default\ 1m0s) +\ \ \ \ \ \ \-\-stats\-file\-name\-length\ int\ \ \ \ \ \ \ \ \ \ \ Max\ file\ name\ length\ in\ stats.\ 0\ for\ no\ limit\ (default\ 45) +\ \ \ \ \ \ \-\-stats\-log\-level\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Log\ level\ to\ show\ \-\-stats\ output\ DEBUG|INFO|NOTICE|ERROR\ (default\ "INFO") +\ \ \ \ \ \ \-\-stats\-one\-line\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Make\ the\ stats\ fit\ on\ one\ line. +\ \ \ \ \ \ \-\-stats\-one\-line\-date\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Enables\ \-\-stats\-one\-line\ and\ add\ current\ date/time\ prefix. 
+\ \ \ \ \ \ \-\-stats\-one\-line\-date\-format\ string\ \ \ \ Enables\ \-\-stats\-one\-line\-date\ and\ uses\ custom\ formatted\ date.\ Enclose\ date\ string\ in\ double\ quotes\ (").\ See\ https://golang.org/pkg/time/#Time.Format +\ \ \ \ \ \ \-\-stats\-unit\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Show\ data\ rate\ in\ stats\ as\ either\ \[aq]bits\[aq]\ or\ \[aq]bytes\[aq]/s\ (default\ "bytes") +\ \ \ \ \ \ \-\-streaming\-upload\-cutoff\ SizeSuffix\ \ \ Cutoff\ for\ switching\ to\ chunked\ upload\ if\ file\ size\ is\ unknown.\ Upload\ starts\ after\ reaching\ cutoff\ or\ when\ file\ ends.\ (default\ 100k) +\ \ \ \ \ \ \-\-suffix\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Suffix\ to\ add\ to\ changed\ files. +\ \ \ \ \ \ \-\-suffix\-keep\-extension\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Preserve\ the\ extension\ when\ using\ \-\-suffix. +\ \ \ \ \ \ \-\-syslog\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Use\ Syslog\ for\ logging +\ \ \ \ \ \ \-\-syslog\-facility\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Facility\ for\ syslog,\ eg\ KERN,USER,...\ (default\ "DAEMON") +\ \ \ \ \ \ \-\-timeout\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IO\ idle\ timeout\ (default\ 5m0s) +\ \ \ \ \ \ \-\-tpslimit\ float\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Limit\ HTTP\ transactions\ per\ second\ to\ this. +\ \ \ \ \ \ \-\-tpslimit\-burst\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Max\ burst\ of\ transactions\ for\ \-\-tpslimit.\ (default\ 1) +\ \ \ \ \ \ \-\-track\-renames\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ When\ synchronizing,\ track\ file\ renames\ and\ do\ a\ server\ side\ move\ if\ possible +\ \ \ \ \ \ \-\-transfers\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Number\ of\ file\ transfers\ to\ run\ in\ parallel.\ (default\ 4) +\ \ \-u,\ \-\-update\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Skip\ files\ that\ are\ newer\ on\ the\ destination. 
+\ \ \ \ \ \ \-\-use\-cookies\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Enable\ session\ cookiejar. +\ \ \ \ \ \ \-\-use\-json\-log\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Use\ json\ log\ format. +\ \ \ \ \ \ \-\-use\-mmap\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Use\ mmap\ allocator\ (see\ docs). +\ \ \ \ \ \ \-\-use\-server\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Use\ server\ modified\ time\ instead\ of\ object\ metadata +\ \ \ \ \ \ \-\-user\-agent\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Set\ the\ user\-agent\ to\ a\ specified\ string.\ The\ default\ is\ rclone/\ version\ (default\ "rclone/v1.49.0") +\ \ \-v,\ \-\-verbose\ count\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Print\ lots\ more\ stuff\ (repeat\ for\ more) +\f[] +.fi +.SS Backend Flags +.PP +These flags are available for every command. +They control the backends and may be set in the config file. +.IP +.nf +\f[C] +\ \ \ \ \ \ \-\-acd\-auth\-url\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Auth\ server\ URL. +\ \ \ \ \ \ \-\-acd\-client\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Amazon\ Application\ Client\ ID. +\ \ \ \ \ \ \-\-acd\-client\-secret\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Amazon\ Application\ Client\ Secret. +\ \ \ \ \ \ \-\-acd\-templink\-threshold\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ Files\ >=\ this\ size\ will\ be\ downloaded\ via\ their\ tempLink.\ (default\ 9G) +\ \ \ \ \ \ \-\-acd\-token\-url\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Token\ server\ url. +\ \ \ \ \ \ \-\-acd\-upload\-wait\-per\-gb\ Duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ Additional\ time\ per\ GB\ to\ wait\ after\ a\ failed\ complete\ upload\ to\ see\ if\ it\ appears.\ (default\ 3m0s) +\ \ \ \ \ \ \-\-alias\-remote\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Remote\ or\ path\ to\ alias. 
+\ \ \ \ \ \ \-\-azureblob\-access\-tier\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Access\ tier\ of\ blob:\ hot,\ cool\ or\ archive. +\ \ \ \ \ \ \-\-azureblob\-account\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Storage\ Account\ Name\ (leave\ blank\ to\ use\ SAS\ URL\ or\ Emulator) +\ \ \ \ \ \ \-\-azureblob\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ Upload\ chunk\ size\ (<=\ 100MB).\ (default\ 4M) +\ \ \ \ \ \ \-\-azureblob\-endpoint\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Endpoint\ for\ the\ service +\ \ \ \ \ \ \-\-azureblob\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Storage\ Account\ Key\ (leave\ blank\ to\ use\ SAS\ URL\ or\ Emulator) +\ \ \ \ \ \ \-\-azureblob\-list\-chunk\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Size\ of\ blob\ list.\ (default\ 5000) +\ \ \ \ \ \ \-\-azureblob\-sas\-url\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SAS\ URL\ for\ container\ level\ access\ only +\ \ \ \ \ \ \-\-azureblob\-upload\-cutoff\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ Cutoff\ for\ switching\ to\ chunked\ upload\ (<=\ 256MB).\ (default\ 256M) +\ \ \ \ \ \ \-\-azureblob\-use\-emulator\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Uses\ local\ storage\ emulator\ if\ provided\ as\ \[aq]true\[aq]\ (leave\ blank\ if\ using\ real\ azure\ storage\ endpoint) +\ \ \ \ \ \ \-\-b2\-account\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Account\ ID\ or\ Application\ Key\ ID +\ \ \ \ \ \ \-\-b2\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Upload\ chunk\ size.\ Must\ fit\ in\ memory.\ (default\ 96M) +\ \ \ \ \ \ \-\-b2\-disable\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Disable\ checksums\ for\ large\ (>\ upload\ cutoff)\ files +\ \ \ \ \ \ \-\-b2\-download\-auth\-duration\ Duration\ \ \ \ \ \ \ \ \ \ \ Time\ before\ the\ authorization\ token\ will\ expire\ in\ s\ or\ suffix\ ms|s|m|h|d.\ (default\ 1w) +\ \ \ \ \ \ \-\-b2\-download\-url\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 
Custom\ endpoint\ for\ downloads. +\ \ \ \ \ \ \-\-b2\-endpoint\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Endpoint\ for\ the\ service. +\ \ \ \ \ \ \-\-b2\-hard\-delete\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Permanently\ delete\ files\ on\ remote\ removal,\ otherwise\ hide\ files. +\ \ \ \ \ \ \-\-b2\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Application\ Key +\ \ \ \ \ \ \-\-b2\-test\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A\ flag\ string\ for\ X\-Bz\-Test\-Mode\ header\ for\ debugging. +\ \ \ \ \ \ \-\-b2\-upload\-cutoff\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Cutoff\ for\ switching\ to\ chunked\ upload.\ (default\ 200M) +\ \ \ \ \ \ \-\-b2\-versions\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Include\ old\ versions\ in\ directory\ listings. +\ \ \ \ \ \ \-\-box\-client\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Box\ App\ Client\ Id. +\ \ \ \ \ \ \-\-box\-client\-secret\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Box\ App\ Client\ Secret +\ \ \ \ \ \ \-\-box\-commit\-retries\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Max\ number\ of\ times\ to\ try\ committing\ a\ multipart\ file.\ (default\ 100) +\ \ \ \ \ \ \-\-box\-upload\-cutoff\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Cutoff\ for\ switching\ to\ multipart\ upload\ (>=\ 50MB).\ (default\ 50M) +\ \ \ \ \ \ \-\-cache\-chunk\-clean\-interval\ Duration\ \ \ \ \ \ \ \ \ \ How\ often\ should\ the\ cache\ perform\ cleanups\ of\ the\ chunk\ storage.\ (default\ 1m0s) +\ \ \ \ \ \ \-\-cache\-chunk\-no\-memory\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Disable\ the\ in\-memory\ cache\ for\ storing\ chunks\ during\ streaming. 
+\ \ \ \ \ \ \-\-cache\-chunk\-path\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ to\ cache\ chunk\ files.\ (default\ "$HOME/.cache/rclone/cache\-backend") +\ \ \ \ \ \ \-\-cache\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ size\ of\ a\ chunk\ (partial\ file\ data).\ (default\ 5M) +\ \ \ \ \ \ \-\-cache\-chunk\-total\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ The\ total\ size\ that\ the\ chunks\ can\ take\ up\ on\ the\ local\ disk.\ (default\ 10G) +\ \ \ \ \ \ \-\-cache\-db\-path\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ to\ store\ file\ structure\ metadata\ DB.\ (default\ "$HOME/.cache/rclone/cache\-backend") +\ \ \ \ \ \ \-\-cache\-db\-purge\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Clear\ all\ the\ cached\ data\ for\ this\ remote\ on\ start. +\ \ \ \ \ \ \-\-cache\-db\-wait\-time\ Duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ How\ long\ to\ wait\ for\ the\ DB\ to\ be\ available\ \-\ 0\ is\ unlimited\ (default\ 1s) +\ \ \ \ \ \ \-\-cache\-info\-age\ Duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ How\ long\ to\ cache\ file\ structure\ information\ (directory\ listings,\ file\ size,\ times\ etc).\ (default\ 6h0m0s) +\ \ \ \ \ \ \-\-cache\-plex\-insecure\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Skip\ all\ certificate\ verifications\ when\ connecting\ to\ the\ Plex\ server +\ \ \ \ \ \ \-\-cache\-plex\-password\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ password\ of\ the\ Plex\ user +\ \ \ \ \ \ \-\-cache\-plex\-url\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ URL\ of\ the\ Plex\ server +\ \ \ \ \ \ \-\-cache\-plex\-username\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ username\ of\ the\ Plex\ user +\ \ \ \ \ \ \-\-cache\-read\-retries\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ How\ many\ times\ to\ retry\ a\ read\ from\ a\ cache\ storage.\ (default\ 10) +\ \ \ \ \ \ \-\-cache\-remote\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 
Remote\ to\ cache. +\ \ \ \ \ \ \-\-cache\-rps\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Limits\ the\ number\ of\ requests\ per\ second\ to\ the\ source\ FS\ (\-1\ to\ disable)\ (default\ \-1) +\ \ \ \ \ \ \-\-cache\-tmp\-upload\-path\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ to\ keep\ temporary\ files\ until\ they\ are\ uploaded. +\ \ \ \ \ \ \-\-cache\-tmp\-wait\-time\ Duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ How\ long\ should\ files\ be\ stored\ in\ local\ cache\ before\ being\ uploaded\ (default\ 15s) +\ \ \ \ \ \ \-\-cache\-workers\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ How\ many\ workers\ should\ run\ in\ parallel\ to\ download\ chunks.\ (default\ 4) +\ \ \ \ \ \ \-\-cache\-writes\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ file\ data\ on\ writes\ through\ the\ FS +\ \ \-L,\ \-\-copy\-links\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Follow\ symlinks\ and\ copy\ the\ pointed\ to\ item. +\ \ \ \ \ \ \-\-crypt\-directory\-name\-encryption\ \ \ \ \ \ \ \ \ \ \ \ \ \ Option\ to\ either\ encrypt\ directory\ names\ or\ leave\ them\ intact.\ (default\ true) +\ \ \ \ \ \ \-\-crypt\-filename\-encryption\ string\ \ \ \ \ \ \ \ \ \ \ \ \ How\ to\ encrypt\ the\ filenames.\ (default\ "standard") +\ \ \ \ \ \ \-\-crypt\-password\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ or\ pass\ phrase\ for\ encryption. +\ \ \ \ \ \ \-\-crypt\-password2\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ or\ pass\ phrase\ for\ salt.\ Optional\ but\ recommended. +\ \ \ \ \ \ \-\-crypt\-remote\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Remote\ to\ encrypt/decrypt. +\ \ \ \ \ \ \-\-crypt\-show\-mapping\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ For\ all\ files\ listed\ show\ how\ the\ names\ encrypt. 
+\ \ \ \ \ \ \-\-drive\-acknowledge\-abuse\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Set\ to\ allow\ files\ which\ return\ cannotDownloadAbusiveFile\ to\ be\ downloaded. +\ \ \ \ \ \ \-\-drive\-allow\-import\-name\-change\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ the\ filetype\ to\ change\ when\ uploading\ Google\ docs\ (e.g.\ file.doc\ to\ file.docx).\ This\ will\ confuse\ sync\ and\ reupload\ every\ time. +\ \ \ \ \ \ \-\-drive\-alternate\-export\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Use\ alternate\ export\ URLs\ for\ google\ documents\ export. +\ \ \ \ \ \ \-\-drive\-auth\-owner\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Only\ consider\ files\ owned\ by\ the\ authenticated\ user. +\ \ \ \ \ \ \-\-drive\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Upload\ chunk\ size.\ Must\ be\ a\ power\ of\ 2\ >=\ 256k.\ (default\ 8M) +\ \ \ \ \ \ \-\-drive\-client\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Google\ Application\ Client\ Id +\ \ \ \ \ \ \-\-drive\-client\-secret\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Google\ Application\ Client\ Secret +\ \ \ \ \ \ \-\-drive\-export\-formats\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Comma\ separated\ list\ of\ preferred\ formats\ for\ downloading\ Google\ docs.\ (default\ "docx,xlsx,pptx,svg") +\ \ \ \ \ \ \-\-drive\-formats\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Deprecated:\ see\ export_formats +\ \ \ \ \ \ \-\-drive\-impersonate\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Impersonate\ this\ user\ when\ using\ a\ service\ account. +\ \ \ \ \ \ \-\-drive\-import\-formats\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Comma\ separated\ list\ of\ preferred\ formats\ for\ uploading\ Google\ docs. +\ \ \ \ \ \ \-\-drive\-keep\-revision\-forever\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Keep\ new\ head\ revision\ of\ each\ file\ forever.
+\ \ \ \ \ \ \-\-drive\-list\-chunk\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Size\ of\ listing\ chunk\ 100\-1000.\ 0\ to\ disable.\ (default\ 1000) +\ \ \ \ \ \ \-\-drive\-pacer\-burst\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Number\ of\ API\ calls\ to\ allow\ without\ sleeping.\ (default\ 100) +\ \ \ \ \ \ \-\-drive\-pacer\-min\-sleep\ Duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Minimum\ time\ to\ sleep\ between\ API\ calls.\ (default\ 100ms) +\ \ \ \ \ \ \-\-drive\-root\-folder\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ID\ of\ the\ root\ folder +\ \ \ \ \ \ \-\-drive\-scope\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Scope\ that\ rclone\ should\ use\ when\ requesting\ access\ from\ drive. +\ \ \ \ \ \ \-\-drive\-server\-side\-across\-configs\ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ server\ side\ operations\ (eg\ copy)\ to\ work\ across\ different\ drive\ configs. +\ \ \ \ \ \ \-\-drive\-service\-account\-credentials\ string\ \ \ \ \ Service\ Account\ Credentials\ JSON\ blob +\ \ \ \ \ \ \-\-drive\-service\-account\-file\ string\ \ \ \ \ \ \ \ \ \ \ \ Service\ Account\ Credentials\ JSON\ file\ path +\ \ \ \ \ \ \-\-drive\-shared\-with\-me\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Only\ show\ files\ that\ are\ shared\ with\ me. +\ \ \ \ \ \ \-\-drive\-size\-as\-quota\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Show\ storage\ quota\ usage\ for\ file\ size. +\ \ \ \ \ \ \-\-drive\-skip\-checksum\-gphotos\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Skip\ MD5\ checksum\ on\ Google\ photos\ and\ videos\ only. +\ \ \ \ \ \ \-\-drive\-skip\-gdocs\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Skip\ google\ documents\ in\ all\ listings. +\ \ \ \ \ \ \-\-drive\-team\-drive\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ID\ of\ the\ Team\ Drive +\ \ \ \ \ \ \-\-drive\-trashed\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Only\ show\ files\ that\ are\ in\ the\ trash. 
+\ \ \ \ \ \ \-\-drive\-upload\-cutoff\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Cutoff\ for\ switching\ to\ chunked\ upload\ (default\ 8M) +\ \ \ \ \ \ \-\-drive\-use\-created\-date\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Use\ file\ created\ date\ instead\ of\ modified\ date. +\ \ \ \ \ \ \-\-drive\-use\-trash\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Send\ files\ to\ the\ trash\ instead\ of\ deleting\ permanently.\ (default\ true) +\ \ \ \ \ \ \-\-drive\-v2\-download\-min\-size\ SizeSuffix\ \ \ \ \ \ \ \ If\ Objects\ are\ greater,\ use\ drive\ v2\ API\ to\ download.\ (default\ off) +\ \ \ \ \ \ \-\-dropbox\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Upload\ chunk\ size.\ (<\ 150M).\ (default\ 48M) +\ \ \ \ \ \ \-\-dropbox\-client\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Dropbox\ App\ Client\ Id +\ \ \ \ \ \ \-\-dropbox\-client\-secret\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Dropbox\ App\ Client\ Secret +\ \ \ \ \ \ \-\-dropbox\-impersonate\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Impersonate\ this\ user\ when\ using\ a\ business\ account.
+\ \ \ \ \ \ \-\-fichier\-api\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Your\ API\ Key,\ get\ it\ from\ https://1fichier.com/console/params.pl +\ \ \ \ \ \ \-\-fichier\-shared\-folder\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ If\ you\ want\ to\ download\ a\ shared\ folder,\ add\ this\ parameter +\ \ \ \ \ \ \-\-ftp\-concurrency\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Maximum\ number\ of\ FTP\ simultaneous\ connections,\ 0\ for\ unlimited +\ \ \ \ \ \ \-\-ftp\-host\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ FTP\ host\ to\ connect\ to +\ \ \ \ \ \ \-\-ftp\-no\-check\-certificate\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Do\ not\ verify\ the\ TLS\ certificate\ of\ the\ server +\ \ \ \ \ \ \-\-ftp\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ FTP\ password +\ \ \ \ \ \ \-\-ftp\-port\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ FTP\ port,\ leave\ blank\ to\ use\ default\ (21) +\ \ \ \ \ \ \-\-ftp\-tls\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Use\ FTP\ over\ TLS\ (Implicit) +\ \ \ \ \ \ \-\-ftp\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ FTP\ username,\ leave\ blank\ for\ current\ username,\ $USER +\ \ \ \ \ \ \-\-gcs\-bucket\-acl\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Access\ Control\ List\ for\ new\ buckets. +\ \ \ \ \ \ \-\-gcs\-bucket\-policy\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Access\ checks\ should\ use\ bucket\-level\ IAM\ policies. +\ \ \ \ \ \ \-\-gcs\-client\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Google\ Application\ Client\ Id +\ \ \ \ \ \ \-\-gcs\-client\-secret\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Google\ Application\ Client\ Secret +\ \ \ \ \ \ \-\-gcs\-location\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Location\ for\ the\ newly\ created\ buckets. 
+\ \ \ \ \ \ \-\-gcs\-object\-acl\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Access\ Control\ List\ for\ new\ objects. +\ \ \ \ \ \ \-\-gcs\-project\-number\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Project\ number. +\ \ \ \ \ \ \-\-gcs\-service\-account\-file\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Service\ Account\ Credentials\ JSON\ file\ path +\ \ \ \ \ \ \-\-gcs\-storage\-class\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ storage\ class\ to\ use\ when\ storing\ objects\ in\ Google\ Cloud\ Storage. +\ \ \ \ \ \ \-\-gphotos\-client\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Google\ Application\ Client\ Id +\ \ \ \ \ \ \-\-gphotos\-client\-secret\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Google\ Application\ Client\ Secret +\ \ \ \ \ \ \-\-gphotos\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Set\ to\ make\ the\ Google\ Photos\ backend\ read\ only. +\ \ \ \ \ \ \-\-gphotos\-read\-size\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Set\ to\ read\ the\ size\ of\ media\ items. 
+\ \ \ \ \ \ \-\-http\-headers\ CommaSepList\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Set\ HTTP\ headers\ for\ all\ transactions +\ \ \ \ \ \ \-\-http\-no\-slash\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Set\ this\ if\ the\ site\ doesn\[aq]t\ end\ directories\ with\ / +\ \ \ \ \ \ \-\-http\-url\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ URL\ of\ http\ host\ to\ connect\ to +\ \ \ \ \ \ \-\-hubic\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Above\ this\ size\ files\ will\ be\ chunked\ into\ a\ _segments\ container.\ (default\ 5G) +\ \ \ \ \ \ \-\-hubic\-client\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Hubic\ Client\ Id +\ \ \ \ \ \ \-\-hubic\-client\-secret\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Hubic\ Client\ Secret +\ \ \ \ \ \ \-\-hubic\-no\-chunk\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ chunk\ files\ during\ streaming\ upload. +\ \ \ \ \ \ \-\-jottacloud\-hard\-delete\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Delete\ files\ permanently\ rather\ than\ putting\ them\ into\ the\ trash. +\ \ \ \ \ \ \-\-jottacloud\-md5\-memory\-limit\ SizeSuffix\ \ \ \ \ \ \ Files\ bigger\ than\ this\ will\ be\ cached\ on\ disk\ to\ calculate\ the\ MD5\ if\ required.\ (default\ 10M) +\ \ \ \ \ \ \-\-jottacloud\-unlink\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Remove\ existing\ public\ link\ to\ file/folder\ with\ link\ command\ rather\ than\ creating. +\ \ \ \ \ \ \-\-jottacloud\-upload\-resume\-limit\ SizeSuffix\ \ \ \ Files\ bigger\ than\ this\ can\ be\ resumed\ if\ the\ upload\ fails.\ (default\ 10M) +\ \ \ \ \ \ \-\-koofr\-endpoint\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ Koofr\ API\ endpoint\ to\ use\ (default\ "https://app.koofr.net") +\ \ \ \ \ \ \-\-koofr\-mountid\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ ID\ of\ the\ mount\ to\ use.\ If\ omitted,\ the\ primary\ mount\ is\ used.
+\ \ \ \ \ \ \-\-koofr\-password\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Your\ Koofr\ password\ for\ rclone\ (generate\ one\ at\ https://app.koofr.net/app/admin/preferences/password) +\ \ \ \ \ \ \-\-koofr\-setmtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Does\ the\ backend\ support\ setting\ modification\ time.\ Set\ this\ to\ false\ if\ you\ use\ a\ mount\ ID\ that\ points\ to\ a\ Dropbox\ or\ Amazon\ Drive\ backend.\ (default\ true) +\ \ \ \ \ \ \-\-koofr\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Your\ Koofr\ user\ name +\ \ \-l,\ \-\-links\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Translate\ symlinks\ to/from\ regular\ files\ with\ a\ \[aq].rclonelink\[aq]\ extension +\ \ \ \ \ \ \-\-local\-case\-insensitive\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Force\ the\ filesystem\ to\ report\ itself\ as\ case\ insensitive +\ \ \ \ \ \ \-\-local\-case\-sensitive\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Force\ the\ filesystem\ to\ report\ itself\ as\ case\ sensitive. +\ \ \ \ \ \ \-\-local\-no\-check\-updated\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ check\ to\ see\ if\ the\ files\ change\ during\ upload +\ \ \ \ \ \ \-\-local\-no\-unicode\-normalization\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ apply\ unicode\ normalization\ to\ paths\ and\ filenames\ (Deprecated) +\ \ \ \ \ \ \-\-local\-nounc\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Disable\ UNC\ (long\ path\ names)\ conversion\ on\ Windows +\ \ \ \ \ \ \-\-mega\-debug\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Output\ more\ debug\ from\ Mega. +\ \ \ \ \ \ \-\-mega\-hard\-delete\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Delete\ files\ permanently\ rather\ than\ putting\ them\ into\ the\ trash. +\ \ \ \ \ \ \-\-mega\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password. 
+\ \ \ \ \ \ \-\-mega\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name +\ \ \-x,\ \-\-one\-file\-system\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ cross\ filesystem\ boundaries\ (unix/macOS\ only). +\ \ \ \ \ \ \-\-onedrive\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Chunk\ size\ to\ upload\ files\ with\ \-\ must\ be\ multiple\ of\ 320k.\ (default\ 10M) +\ \ \ \ \ \ \-\-onedrive\-client\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Microsoft\ App\ Client\ Id +\ \ \ \ \ \ \-\-onedrive\-client\-secret\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Microsoft\ App\ Client\ Secret +\ \ \ \ \ \ \-\-onedrive\-drive\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ ID\ of\ the\ drive\ to\ use +\ \ \ \ \ \ \-\-onedrive\-drive\-type\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ type\ of\ the\ drive\ (\ personal\ |\ business\ |\ documentLibrary\ ) +\ \ \ \ \ \ \-\-onedrive\-expose\-onenote\-files\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Set\ to\ make\ OneNote\ files\ show\ up\ in\ directory\ listings. +\ \ \ \ \ \ \-\-opendrive\-password\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password. +\ \ \ \ \ \ \-\-opendrive\-username\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Username +\ \ \ \ \ \ \-\-pcloud\-client\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Pcloud\ App\ Client\ Id +\ \ \ \ \ \ \-\-pcloud\-client\-secret\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Pcloud\ App\ Client\ Secret +\ \ \ \ \ \ \-\-qingstor\-access\-key\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ QingStor\ Access\ Key\ ID +\ \ \ \ \ \ \-\-qingstor\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Chunk\ size\ to\ use\ for\ uploading.\ (default\ 4M) +\ \ \ \ \ \ \-\-qingstor\-connection\-retries\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ Number\ of\ connection\ retries.\ (default\ 3) +\ \ \ \ \ \ \-\-qingstor\-endpoint\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Enter\ an\ endpoint\ URL\ to\ connect\ to\ the\ QingStor\ API.
+\ \ \ \ \ \ \-\-qingstor\-env\-auth\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Get\ QingStor\ credentials\ from\ runtime.\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank. +\ \ \ \ \ \ \-\-qingstor\-secret\-access\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ QingStor\ Secret\ Access\ Key\ (password) +\ \ \ \ \ \ \-\-qingstor\-upload\-concurrency\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ Concurrency\ for\ multipart\ uploads.\ (default\ 1) +\ \ \ \ \ \ \-\-qingstor\-upload\-cutoff\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ Cutoff\ for\ switching\ to\ chunked\ upload\ (default\ 200M) +\ \ \ \ \ \ \-\-qingstor\-zone\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Zone\ to\ connect\ to. +\ \ \ \ \ \ \-\-s3\-access\-key\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ AWS\ Access\ Key\ ID. +\ \ \ \ \ \ \-\-s3\-acl\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Canned\ ACL\ used\ when\ creating\ buckets\ and\ storing\ or\ copying\ objects. +\ \ \ \ \ \ \-\-s3\-bucket\-acl\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Canned\ ACL\ used\ when\ creating\ buckets. +\ \ \ \ \ \ \-\-s3\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Chunk\ size\ to\ use\ for\ uploading.\ (default\ 5M) +\ \ \ \ \ \ \-\-s3\-disable\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ store\ MD5\ checksum\ with\ object\ metadata +\ \ \ \ \ \ \-\-s3\-endpoint\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Endpoint\ for\ S3\ API. +\ \ \ \ \ \ \-\-s3\-env\-auth\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2/ECS\ meta\ data\ if\ no\ env\ vars). 
+\ \ \ \ \ \ \-\-s3\-force\-path\-style\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ If\ true\ use\ path\ style\ access\ if\ false\ use\ virtual\ hosted\ style.\ (default\ true) +\ \ \ \ \ \ \-\-s3\-location\-constraint\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Location\ constraint\ \-\ must\ be\ set\ to\ match\ the\ Region. +\ \ \ \ \ \ \-\-s3\-provider\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Choose\ your\ S3\ provider. +\ \ \ \ \ \ \-\-s3\-region\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Region\ to\ connect\ to. +\ \ \ \ \ \ \-\-s3\-secret\-access\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ AWS\ Secret\ Access\ Key\ (password) +\ \ \ \ \ \ \-\-s3\-server\-side\-encryption\ string\ \ \ \ \ \ \ \ \ \ \ \ \ The\ server\-side\ encryption\ algorithm\ used\ when\ storing\ this\ object\ in\ S3. +\ \ \ \ \ \ \-\-s3\-session\-token\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ An\ AWS\ session\ token +\ \ \ \ \ \ \-\-s3\-sse\-kms\-key\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ If\ using\ KMS\ ID\ you\ must\ provide\ the\ ARN\ of\ Key. +\ \ \ \ \ \ \-\-s3\-storage\-class\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ storage\ class\ to\ use\ when\ storing\ new\ objects\ in\ S3. +\ \ \ \ \ \ \-\-s3\-upload\-concurrency\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Concurrency\ for\ multipart\ uploads.\ (default\ 4) +\ \ \ \ \ \ \-\-s3\-upload\-cutoff\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Cutoff\ for\ switching\ to\ chunked\ upload\ (default\ 200M) +\ \ \ \ \ \ \-\-s3\-use\-accelerate\-endpoint\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ If\ true\ use\ the\ AWS\ S3\ accelerated\ endpoint. +\ \ \ \ \ \ \-\-s3\-v2\-auth\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ If\ true\ use\ v2\ authentication. +\ \ \ \ \ \ \-\-sftp\-ask\-password\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ asking\ for\ SFTP\ password\ when\ needed. 
+\ \ \ \ \ \ \-\-sftp\-disable\-hashcheck\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Disable\ the\ execution\ of\ SSH\ commands\ to\ determine\ if\ remote\ file\ hashing\ is\ available. +\ \ \ \ \ \ \-\-sftp\-host\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSH\ host\ to\ connect\ to +\ \ \ \ \ \ \-\-sftp\-key\-file\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Path\ to\ PEM\-encoded\ private\ key\ file,\ leave\ blank\ or\ set\ key\-use\-agent\ to\ use\ ssh\-agent. +\ \ \ \ \ \ \-\-sftp\-key\-file\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ passphrase\ to\ decrypt\ the\ PEM\-encoded\ private\ key\ file. +\ \ \ \ \ \ \-\-sftp\-key\-use\-agent\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ When\ set\ forces\ the\ usage\ of\ the\ ssh\-agent. +\ \ \ \ \ \ \-\-sftp\-md5sum\-command\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ command\ used\ to\ read\ md5\ hashes.\ Leave\ blank\ for\ autodetect. +\ \ \ \ \ \ \-\-sftp\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSH\ password,\ leave\ blank\ to\ use\ ssh\-agent. +\ \ \ \ \ \ \-\-sftp\-path\-override\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ path\ used\ by\ SSH\ connection. +\ \ \ \ \ \ \-\-sftp\-port\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSH\ port,\ leave\ blank\ to\ use\ default\ (22) +\ \ \ \ \ \ \-\-sftp\-set\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Set\ the\ modified\ time\ on\ the\ remote\ if\ set.\ (default\ true) +\ \ \ \ \ \ \-\-sftp\-sha1sum\-command\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ command\ used\ to\ read\ sha1\ hashes.\ Leave\ blank\ for\ autodetect. 
+\ \ \ \ \ \ \-\-sftp\-use\-insecure\-cipher\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Enable\ the\ use\ of\ the\ aes128\-cbc\ cipher\ and\ diffie\-hellman\-group\-exchange\-sha256,\ diffie\-hellman\-group\-exchange\-sha1\ key\ exchange.\ Those\ algorithms\ are\ insecure\ and\ may\ allow\ plaintext\ data\ to\ be\ recovered\ by\ an\ attacker. +\ \ \ \ \ \ \-\-sftp\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSH\ username,\ leave\ blank\ for\ current\ username,\ ncw +\ \ \ \ \ \ \-\-skip\-links\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ warn\ about\ skipped\ symlinks. +\ \ \ \ \ \ \-\-swift\-application\-credential\-id\ string\ \ \ \ \ \ \ Application\ Credential\ ID\ (OS_APPLICATION_CREDENTIAL_ID) +\ \ \ \ \ \ \-\-swift\-application\-credential\-name\ string\ \ \ \ \ Application\ Credential\ Name\ (OS_APPLICATION_CREDENTIAL_NAME) +\ \ \ \ \ \ \-\-swift\-application\-credential\-secret\ string\ \ \ Application\ Credential\ Secret\ (OS_APPLICATION_CREDENTIAL_SECRET) +\ \ \ \ \ \ \-\-swift\-auth\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Authentication\ URL\ for\ server\ (OS_AUTH_URL). 
+\ \ \ \ \ \ \-\-swift\-auth\-token\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Auth\ Token\ from\ alternate\ authentication\ \-\ optional\ (OS_AUTH_TOKEN) +\ \ \ \ \ \ \-\-swift\-auth\-version\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ AuthVersion\ \-\ optional\ \-\ set\ to\ (1,2,3)\ if\ your\ auth\ URL\ has\ no\ version\ (ST_AUTH_VERSION) +\ \ \ \ \ \ \-\-swift\-chunk\-size\ SizeSuffix\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Above\ this\ size\ files\ will\ be\ chunked\ into\ a\ _segments\ container.\ (default\ 5G) +\ \ \ \ \ \ \-\-swift\-domain\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ domain\ \-\ optional\ (v3\ auth)\ (OS_USER_DOMAIN_NAME) +\ \ \ \ \ \ \-\-swift\-endpoint\-type\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Endpoint\ type\ to\ choose\ from\ the\ service\ catalogue\ (OS_ENDPOINT_TYPE)\ (default\ "public") +\ \ \ \ \ \ \-\-swift\-env\-auth\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Get\ swift\ credentials\ from\ environment\ variables\ in\ standard\ OpenStack\ form. +\ \ \ \ \ \ \-\-swift\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ API\ key\ or\ password\ (OS_PASSWORD). +\ \ \ \ \ \ \-\-swift\-no\-chunk\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ chunk\ files\ during\ streaming\ upload. 
+\ \ \ \ \ \ \-\-swift\-region\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Region\ name\ \-\ optional\ (OS_REGION_NAME) +\ \ \ \ \ \ \-\-swift\-storage\-policy\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ storage\ policy\ to\ use\ when\ creating\ a\ new\ container +\ \ \ \ \ \ \-\-swift\-storage\-url\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Storage\ URL\ \-\ optional\ (OS_STORAGE_URL) +\ \ \ \ \ \ \-\-swift\-tenant\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Tenant\ name\ \-\ optional\ for\ v1\ auth,\ this\ or\ tenant_id\ required\ otherwise\ (OS_TENANT_NAME\ or\ OS_PROJECT_NAME) +\ \ \ \ \ \ \-\-swift\-tenant\-domain\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Tenant\ domain\ \-\ optional\ (v3\ auth)\ (OS_PROJECT_DOMAIN_NAME) +\ \ \ \ \ \ \-\-swift\-tenant\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Tenant\ ID\ \-\ optional\ for\ v1\ auth,\ this\ or\ tenant\ required\ otherwise\ (OS_TENANT_ID) +\ \ \ \ \ \ \-\-swift\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ to\ log\ in\ (OS_USERNAME). +\ \ \ \ \ \ \-\-swift\-user\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ ID\ to\ log\ in\ \-\ optional\ \-\ most\ swift\ systems\ use\ user\ and\ leave\ this\ blank\ (v3\ auth)\ (OS_USER_ID). +\ \ \ \ \ \ \-\-union\-remotes\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ List\ of\ space\ separated\ remotes. +\ \ \ \ \ \ \-\-webdav\-bearer\-token\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Bearer\ token\ instead\ of\ user/pass\ (eg\ a\ Macaroon) +\ \ \ \ \ \ \-\-webdav\-bearer\-token\-command\ string\ \ \ \ \ \ \ \ \ \ \ Command\ to\ run\ to\ get\ a\ bearer\ token +\ \ \ \ \ \ \-\-webdav\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password. 
+\ \ \ \ \ \ \-\-webdav\-url\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ URL\ of\ http\ host\ to\ connect\ to +\ \ \ \ \ \ \-\-webdav\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name +\ \ \ \ \ \ \-\-webdav\-vendor\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Name\ of\ the\ Webdav\ site/service/software\ you\ are\ using +\ \ \ \ \ \ \-\-yandex\-client\-id\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Yandex\ Client\ Id +\ \ \ \ \ \ \-\-yandex\-client\-secret\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Yandex\ Client\ Secret +\ \ \ \ \ \ \-\-yandex\-unlink\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Remove\ existing\ public\ link\ to\ file/folder\ with\ link\ command\ rather\ than\ creating. +\f[] +.fi +.SS 1Fichier +.PP +This is a backend for the 1Fichier (https://1fichier.com) cloud storage +service. +Note that a Premium subscription is required to use the API. +.PP +Paths are specified as \f[C]remote:path\f[] +.PP +Paths may be as deep as required, eg +\f[C]remote:directory/subdirectory\f[]. +.PP +The initial setup for 1Fichier involves getting the API key from the +website which you need to do in your browser. +.PP +Here is an example of how to make a remote called \f[C]remote\f[]. +First run: +.IP +.nf +\f[C] +\ rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +No\ remotes\ found\ \-\ make\ a\ new\ one +n)\ New\ remote +s)\ Set\ configuration\ password +q)\ Quit\ config +n/s/q>\ n +name>\ remote +Type\ of\ storage\ to\ configure. +Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ ("").
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+[snip]
+XX\ /\ 1Fichier
+\ \ \ \\\ "fichier"
+[snip]
+Storage>\ fichier
+**\ See\ help\ for\ fichier\ backend\ at:\ https://rclone.org/fichier/\ **
+
+Your\ API\ Key,\ get\ it\ from\ https://1fichier.com/console/params.pl
+Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ ("").
+api_key>\ example_key
+
+Edit\ advanced\ config?\ (y/n)
+y)\ Yes
+n)\ No
+y/n>\ 
+Remote\ config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+type\ =\ fichier
+api_key\ =\ example_key
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+\f[]
+.fi
+.PP
+Once configured you can then use \f[C]rclone\f[] like this,
+.PP
+List directories in top level of your 1Fichier account
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+List all the files in your 1Fichier account
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:
+\f[]
+.fi
+.PP
+To copy a local directory to a 1Fichier directory called backup
+.IP
+.nf
+\f[C]
+rclone\ copy\ /home/source\ remote:backup
+\f[]
+.fi
+.SS Modified time and hashes
+.PP
+1Fichier does not support modification times.
+It supports the Whirlpool hash algorithm.
+.SS Duplicated files
+.PP
+1Fichier can have two files with exactly the same name and path (unlike
+a normal file system).
+.PP
+Duplicated files cause problems with syncing and you will see messages
+in the log about duplicates.
+.SS Forbidden characters
+.PP
+1Fichier does not support the characters
+\f[C]\\\ <\ >\ "\ \[aq]\ `\ $\f[] and spaces at the beginning of folder
+names.
+\f[C]rclone\f[] automatically escapes these to a unicode equivalent.
+The exception is \f[C]/\f[], which cannot be escaped and will therefore
+lead to errors.
+.SS Standard Options
+.PP
+Here are the standard options specific to fichier (1Fichier).
+.SS \[en]fichier\-api\-key +.PP +Your API Key, get it from https://1fichier.com/console/params.pl +.IP \[bu] 2 +Config: api_key +.IP \[bu] 2 +Env Var: RCLONE_FICHIER_API_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: "" +.SS Advanced Options +.PP +Here are the advanced options specific to fichier (1Fichier). +.SS \[en]fichier\-shared\-folder +.PP +If you want to download a shared folder, add this parameter +.IP \[bu] 2 +Config: shared_folder +.IP \[bu] 2 +Env Var: RCLONE_FICHIER_SHARED_FOLDER +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: "" .SS Alias .PP The \f[C]alias\f[] remote provides a new name for another remote. @@ -9370,51 +10883,11 @@ n/s/q>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Alias\ for\ an\ existing\ remote +[snip] +XX\ /\ Alias\ for\ an\ existing\ remote \ \ \ \\\ "alias" -\ 2\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 3\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 4\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 5\ /\ Box -\ \ \ \\\ "box" -\ 6\ /\ Cache\ a\ remote -\ \ \ \\\ "cache" -\ 7\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 8\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 9\ /\ FTP\ Connection -\ \ \ \\\ "ftp" -10\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -11\ /\ Google\ Drive -\ \ \ \\\ "drive" -12\ /\ Hubic -\ \ \ \\\ "hubic" -13\ /\ Local\ Disk -\ \ \ \\\ "local" -14\ /\ Microsoft\ Azure\ Blob\ Storage -\ \ \ \\\ "azureblob" -15\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -16\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -17\ /\ Pcloud -\ \ \ \\\ "pcloud" -18\ /\ QingCloud\ Object\ Storage -\ \ \ \\\ "qingstor" -19\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -20\ /\ Webdav -\ \ \ \\\ "webdav" -21\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -22\ /\ http\ Connection -\ \ \ \\\ "http" -Storage>\ 1 +[snip] +Storage>\ alias Remote\ or\ 
path\ to\ alias. Can\ be\ "myremote:path/to/dir",\ "myremote:bucket",\ "myremote:"\ or\ "/local/path". remote>\ /mnt/storage/backup @@ -9553,35 +11026,11 @@ n/r/c/s/q>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive +[snip] +XX\ /\ Amazon\ Drive \ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ FTP\ Connection -\ \ \ \\\ "ftp" -\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 8\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 9\ /\ Hubic -\ \ \ \\\ "hubic" -10\ /\ Local\ Disk -\ \ \ \\\ "local" -11\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -13\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -14\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 1 +[snip] +Storage>\ amazon\ cloud\ drive Amazon\ Application\ Client\ Id\ \-\ required. client_id>\ your\ client\ ID\ goes\ here Amazon\ Application\ Client\ Secret\ \-\ required. @@ -9898,17 +11347,10 @@ n/s/q>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Alias\ for\ an\ existing\ remote -\ \ \ \\\ "alias" -\ 2\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 3\ /\ Amazon\ S3\ Compliant\ Storage\ Providers\ (AWS,\ Ceph,\ Dreamhost,\ IBM\ COS,\ Minio) -\ \ \ \\\ "s3" -\ 4\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" [snip] -23\ /\ http\ Connection -\ \ \ \\\ "http" +XX\ /\ Amazon\ S3\ Compliant\ Storage\ Providers\ (AWS,\ Ceph,\ Dreamhost,\ IBM\ COS,\ Minio) +\ \ \ \\\ "s3" +[snip] Storage>\ s3 Choose\ your\ S3\ provider. 
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value @@ -10062,6 +11504,8 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ \ \ \\\ "GLACIER" \ 7\ /\ Glacier\ Deep\ Archive\ storage\ class \ \ \ \\\ "DEEP_ARCHIVE" +\ 8\ /\ Intelligent\-Tiering\ storage\ class +\ \ \ \\\ "INTELLIGENT_TIERING" storage_class>\ 1 Remote\ config \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- @@ -10231,6 +11675,9 @@ written to: .IP \[bu] 2 \f[C]PutObjectACL\f[] .PP +When using the \f[C]lsd\f[] subcommand, the \f[C]ListAllMyBuckets\f[] +permission is required. +.PP Example policy: .IP .nf @@ -10254,7 +11701,12 @@ Example policy: \ \ \ \ \ \ \ \ \ \ \ \ \ \ "arn:aws:s3:::BUCKET_NAME/*", \ \ \ \ \ \ \ \ \ \ \ \ \ \ "arn:aws:s3:::BUCKET_NAME" \ \ \ \ \ \ \ \ \ \ \ \ ] -\ \ \ \ \ \ \ \ } +\ \ \ \ \ \ \ \ }, +\ \ \ \ \ \ \ \ { +\ \ \ \ \ \ \ \ \ \ \ \ "Effect":\ "Allow", +\ \ \ \ \ \ \ \ \ \ \ \ "Action":\ "s3:ListAllMyBuckets", +\ \ \ \ \ \ \ \ \ \ \ \ "Resource":\ "arn:aws:s3:::*" +\ \ \ \ \ \ \ \ }\ \ \ \ \ \ \ ] } \f[] @@ -11572,6 +13024,12 @@ Glacier storage class .IP \[bu] 2 Glacier Deep Archive storage class .RE +.IP \[bu] 2 +\[lq]INTELLIGENT_TIERING\[rq] +.RS 2 +.IP \[bu] 2 +Intelligent\-Tiering storage class +.RE .RE .SS \[en]s3\-storage\-class .PP @@ -12311,9 +13769,8 @@ n/s>\ n name>\ wasabi Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) +[snip] +XX\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) \ \ \ \\\ "s3" [snip] Storage>\ s3 @@ -12564,33 +14021,11 @@ n/q>\ n name>\ remote Type\ of\ storage\ to\ configure. 
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 +[snip] +XX\ /\ Backblaze\ B2 \ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic -\ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 3 +[snip] +Storage>\ b2 Account\ ID\ or\ Application\ Key\ ID account>\ 123456789abc Application\ Key @@ -12809,8 +14244,9 @@ All copy commands send the following 4 requests: The \f[C]b2_list_file_names\f[] request will be sent once for every 1k files in the remote path, providing the checksum and modification time of the listed files. -As of version 1.33 issue #818 (https://github.com/rclone/rclone/issues/818) -causes extra requests to be sent when using B2 with Crypt. +As of version 1.33 issue +#818 (https://github.com/rclone/rclone/issues/818) causes extra requests +to be sent when using B2 with Crypt. When a copy operation does not require any files to be uploaded, no more requests will be sent. .PP @@ -12869,6 +14305,38 @@ nearest millisecond appended to them. .PP Note that when using \f[C]\-\-b2\-versions\f[] no file write operations are permitted, so you can't upload files or delete them. +.SS B2 and rclone link +.PP +Rclone supports generating file share links for private B2 buckets. 
+They can either be for a file, for example:
+.IP
+.nf
+\f[C]
+\&./rclone\ link\ B2:bucket/path/to/file.txt
+https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
+\f[]
+.fi
+.PP
+or if run on a directory you will get:
+.IP
+.nf
+\f[C]
+\&./rclone\ link\ B2:bucket/path
+https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx
+\f[]
+.fi
+.PP
+You can then use the authorization token (the part of the URL from the
+\f[C]?Authorization=\f[] on) on any file path under that directory.
+For example:
+.IP
+.nf
+\f[C]
+https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
+https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
+https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx
+\f[]
+.fi
.SS Standard Options
.PP
Here are the standard options specific to b2 (Backblaze B2).
@@ -13008,6 +14476,7 @@ Custom endpoint for downloads.
.PP
This is usually set to a Cloudflare CDN URL as Backblaze offers free
egress for data downloaded through the Cloudflare network.
+This is probably only useful for a public bucket.
Leave blank if you want to use the endpoint provided by Backblaze.
.IP \[bu] 2
Config: download_url
@@ -13017,6 +14486,22 @@ Env Var: RCLONE_B2_DOWNLOAD_URL
Type: string
.IP \[bu] 2
Default: ""
+.SS \[en]b2\-download\-auth\-duration
+.PP
+Time before the authorization token will expire in s or suffix
+ms|s|m|h|d.
+.PP
+The duration before the download authorization token will expire.
+The minimum value is 1 second.
+The maximum value is one week.
+.IP \[bu] 2
+Config: download_auth_duration
+.IP \[bu] 2
+Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION
+.IP \[bu] 2
+Type: Duration
+.IP \[bu] 2
+Default: 1w
.SS Box
.PP
Paths are specified as \f[C]remote:path\f[]
@@ -13049,38 +14534,10 @@ n/s/q>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Box +[snip] +XX\ /\ Box \ \ \ \\\ "box" -\ 5\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 6\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 7\ /\ FTP\ Connection -\ \ \ \\\ "ftp" -\ 8\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 9\ /\ Google\ Drive -\ \ \ \\\ "drive" -10\ /\ Hubic -\ \ \ \\\ "hubic" -11\ /\ Local\ Disk -\ \ \ \\\ "local" -12\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -13\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -14\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -15\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -16\ /\ http\ Connection -\ \ \ \\\ "http" +[snip] Storage>\ box Box\ App\ Client\ Id\ \-\ leave\ blank\ normally. client_id>\ @@ -13359,11 +14816,11 @@ n/r/c/s/q>\ n name>\ test\-cache Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\&... -\ 5\ /\ Cache\ a\ remote +[snip] +XX\ /\ Cache\ a\ remote \ \ \ \\\ "cache" -\&... -Storage>\ 5 +[snip] +Storage>\ cache Remote\ to\ cache. Normally\ should\ contain\ a\ \[aq]:\[aq]\ and\ a\ path,\ eg\ "myremote:path/to/dir", "myremote:bucket"\ or\ maybe\ "myremote:"\ (not\ recommended). @@ -14099,33 +15556,11 @@ n/s/q>\ n\ \ \ name>\ secret Type\ of\ storage\ to\ configure. 
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote +[snip] +XX\ /\ Encrypt/Decrypt\ a\ remote \ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic -\ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 5 +[snip] +Storage>\ crypt Remote\ to\ encrypt/decrypt. Normally\ should\ contain\ a\ \[aq]:\[aq]\ and\ a\ path,\ eg\ "myremote:path/to/dir", "myremote:bucket"\ or\ maybe\ "myremote:"\ (not\ recommended). @@ -14678,33 +16113,11 @@ e/n/d/q>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox +[snip] +XX\ /\ Dropbox \ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic -\ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 4 +[snip] +Storage>\ dropbox Dropbox\ App\ Key\ \-\ leave\ blank\ normally. 
app_key> Dropbox\ App\ Secret\ \-\ leave\ blank\ normally. @@ -14894,7 +16307,7 @@ Type\ of\ storage\ to\ configure. Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value [snip] -10\ /\ FTP\ Connection +XX\ /\ FTP\ Connection \ \ \ \\\ "ftp" [snip] Storage>\ ftp @@ -15125,33 +16538,11 @@ e/n/d/q>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) +[snip] +XX\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) \ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic -\ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 6 +[snip] +Storage>\ google\ cloud\ storage Google\ Application\ Client\ Id\ \-\ leave\ blank\ normally. client_id> Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally. @@ -15764,7 +17155,7 @@ name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value [snip] -10\ /\ Google\ Drive +XX\ /\ Google\ Drive \ \ \ \\\ "drive" [snip] Storage>\ drive @@ -16840,8 +18231,8 @@ export URLs for drive documents. Users have reported that the official export URLs can't export large documents, whereas these unofficial ones can. 
.PP
-See rclone issue #2243 (https://github.com/rclone/rclone/issues/2243) for
-background, this google drive
+See rclone issue #2243 (https://github.com/rclone/rclone/issues/2243)
+for background, this google drive
issue (https://issuetracker.google.com/issues/36761333) and this
helpful
post (https://www.labnol.org/internet/direct-links-for-google-drive/28356/).
.IP \[bu] 2
@@ -17058,7 +18449,7 @@ It doesn't matter what Google account you use.
Select a project or create a new project.
.IP "3." 3
Under \[lq]ENABLE APIS AND SERVICES\[rq] search for \[lq]Drive\[rq], and
-enable the then \[lq]Google Drive API\[rq].
+enable the \[lq]Google Drive API\[rq].
.IP "4." 3
Click \[lq]Credentials\[rq] in the left\-side panel (not \[lq]Create
credentials\[rq], which opens the wizard), then \[lq]Create
@@ -17075,6 +18466,412 @@ Use these values in rclone config to add a new remote or edit an
existing remote.
.PP
(Thanks to \@balazer on github for these instructions.)
+.SS Google Photos
+.PP
+The rclone backend for Google
+Photos (https://www.google.com/photos/about/) is a specialized backend
+for transferring photos and videos to and from Google Photos.
+.PP
+\f[B]NB\f[] The Google Photos API which rclone uses has quite a few
+limitations, so please read the limitations section carefully to make
+sure it is suitable for your use.
+.SS Configuring Google Photos
+.PP
+The initial setup for Google Photos involves getting a token from
+Google Photos which you need to do in your browser.
+\f[C]rclone\ config\f[] walks you through it.
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[].
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/s/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ ("").
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +[snip] +XX\ /\ Google\ Photos +\ \ \ \\\ "google\ photos" +[snip] +Storage>\ google\ photos +**\ See\ help\ for\ google\ photos\ backend\ at:\ https://rclone.org/googlephotos/\ ** + +Google\ Application\ Client\ Id +Leave\ blank\ normally. +Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). +client_id>\ +Google\ Application\ Client\ Secret +Leave\ blank\ normally. +Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). +client_secret>\ +Set\ to\ make\ the\ Google\ Photos\ backend\ read\ only. + +If\ you\ choose\ read\ only\ then\ rclone\ will\ only\ request\ read\ only\ access +to\ your\ photos,\ otherwise\ rclone\ will\ request\ full\ access. +Enter\ a\ boolean\ value\ (true\ or\ false).\ Press\ Enter\ for\ the\ default\ ("false"). +read_only>\ +Edit\ advanced\ config?\ (y/n) +y)\ Yes +n)\ No +y/n>\ n +Remote\ config +Use\ auto\ config? +\ *\ Say\ Y\ if\ not\ sure +\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine +y)\ Yes +n)\ No +y/n>\ y +If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth +Log\ in\ and\ authorize\ rclone\ for\ access +Waiting\ for\ code... +Got\ code + +***\ IMPORTANT:\ All\ media\ items\ uploaded\ to\ Google\ Photos\ with\ rclone +***\ are\ stored\ in\ full\ resolution\ at\ original\ quality.\ \ These\ uploads +***\ will\ count\ towards\ storage\ in\ your\ Google\ Account. + +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +type\ =\ google\ photos +token\ =\ {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019\-06\-28T17:38:04.644930156+01:00"} +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +Note that rclone runs a webserver on your local machine to collect the +token as returned from Google if you use auto config mode. 
+This only runs from the moment it opens your browser to the moment you
+get back the verification code.
+This is on \f[C]http://127.0.0.1:53682/\f[] and this may require you to
+unblock it temporarily if you are running a host firewall, or use manual
+mode.
+.PP
+This remote is called \f[C]remote\f[] and can now be used like this
+.PP
+See all the albums in your photos
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:album
+\f[]
+.fi
+.PP
+Make a new album
+.IP
+.nf
+\f[C]
+rclone\ mkdir\ remote:album/newAlbum
+\f[]
+.fi
+.PP
+List the contents of an album
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:album/newAlbum
+\f[]
+.fi
+.PP
+Sync \f[C]/home/local/images\f[] to Google Photos, removing any
+excess files in the album.
+.IP
+.nf
+\f[C]
+rclone\ sync\ /home/local/images\ remote:album/newAlbum
+\f[]
+.fi
+.SS Layout
+.PP
+As Google Photos is not a general purpose cloud storage system the
+backend is laid out to help you navigate it.
+.PP
+The directories under \f[C]media\f[] show different ways of categorizing
+the media.
+Each file will appear multiple times.
+So if you want to make a backup of your Google Photos you might choose
+to back up \f[C]remote:media/by\-month\f[].
+(\f[B]NB\f[] \f[C]remote:media/by\-day\f[] is rather slow at the moment
+so avoid it for syncing.)
+.PP
+Note that all your photos and videos will appear somewhere under
+\f[C]media\f[], but they may not appear under \f[C]album\f[] unless
+you've put them into albums.
+.IP
+.nf
+\f[C]
+/
+\-\ upload
+\ \ \ \ \-\ file1.jpg
+\ \ \ \ \-\ file2.jpg
+\ \ \ \ \-\ ...
+\-\ media
+\ \ \ \ \-\ all
+\ \ \ \ \ \ \ \ \-\ file1.jpg
+\ \ \ \ \ \ \ \ \-\ file2.jpg
+\ \ \ \ \ \ \ \ \-\ ...
+\ \ \ \ \-\ by\-year
+\ \ \ \ \ \ \ \ \-\ 2000
+\ \ \ \ \ \ \ \ \ \ \ \ \-\ file1.jpg
+\ \ \ \ \ \ \ \ \ \ \ \ \-\ ...
+\ \ \ \ \ \ \ \ \-\ 2001
+\ \ \ \ \ \ \ \ \ \ \ \ \-\ file2.jpg
+\ \ \ \ \ \ \ \ \ \ \ \ \-\ ...
+\ \ \ \ \ \ \ \ \-\ ...
+\ \ \ \ \-\ by\-month
+\ \ \ \ \ \ \ \ \-\ 2000
+\ \ \ \ \ \ \ \ \ \ \ \ \-\ 2000\-01
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\ file1.jpg
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\ ...
+\ \ \ \ \ \ \ \ \ \ \ \ \-\ 2000\-02
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\ file2.jpg
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\ ...
+\ \ \ \ \ \ \ \ \-\ ...
+\ \ \ \ \-\ by\-day
+\ \ \ \ \ \ \ \ \-\ 2000
+\ \ \ \ \ \ \ \ \ \ \ \ \-\ 2000\-01\-01
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\ file1.jpg
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\ ...
+\ \ \ \ \ \ \ \ \ \ \ \ \-\ 2000\-01\-02
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\ file2.jpg
+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\ ...
+\ \ \ \ \ \ \ \ \-\ ...
+\-\ album
+\ \ \ \ \-\ album\ name
+\ \ \ \ \-\ album\ name/sub
+\-\ shared\-album
+\ \ \ \ \-\ album\ name
+\ \ \ \ \-\ album\ name/sub
+\f[]
+.fi
+.PP
+There are two writable parts of the tree, the \f[C]upload\f[] directory
+and sub directories of the \f[C]album\f[] directory.
+.PP
+The \f[C]upload\f[] directory is for uploading files you don't want to
+put into albums.
+This will be empty to start with and will contain the files you've
+uploaded for one rclone session only, becoming empty again when you
+restart rclone.
+The use case for this would be if you have a load of files you just want
+to dump into Google Photos as a one\-off.
+For repeated syncing, uploading to \f[C]album\f[] will work better.
+.PP
+Directories within the \f[C]album\f[] directory are also writable and
+you may create new directories (albums) under \f[C]album\f[].
+If you copy files with a directory hierarchy in there then rclone will
+create albums with the \f[C]/\f[] character in them.
+For example if you do
+.IP
+.nf
+\f[C]
+rclone\ copy\ /path/to/images\ remote:album/images
+\f[]
+.fi
+.PP
+and the images directory contains
+.IP
+.nf
+\f[C]
+images
+\ \ \ \ \-\ file1.jpg
+\ \ \ \ dir
+\ \ \ \ \ \ \ \ file2.jpg
+\ \ \ \ dir2
+\ \ \ \ \ \ \ \ dir3
+\ \ \ \ \ \ \ \ \ \ \ \ file3.jpg
+\f[]
+.fi
+.PP
+Then rclone will create the following albums with the following files in
+.IP \[bu] 2
+images
+.RS 2
+.IP \[bu] 2
+file1.jpg
+.RE
+.IP \[bu] 2
+images/dir
+.RS 2
+.IP \[bu] 2
+file2.jpg
+.RE
+.IP \[bu] 2
+images/dir2/dir3
+.RS 2
+.IP \[bu] 2
+file3.jpg
+.RE
+.PP
+This means that you can use the \f[C]album\f[] path pretty much like a
+normal filesystem and it is a good target for repeated syncing.
+.PP
+The \f[C]shared\-album\f[] directory shows albums shared with you or by
+you.
+This is similar to the Sharing tab in the Google Photos web interface.
+.SS Limitations
+.PP
+Only images and videos can be uploaded.
+If you attempt to upload files that are not videos or images, or formats
+that Google Photos doesn't understand, rclone will upload the file, then
+Google Photos will give an error when it is turned into a media item.
+.PP
+Note that all media items uploaded to Google Photos through the API are
+stored in full resolution at \[lq]original quality\[rq] and
+\f[B]will\f[] count towards your storage quota in your Google Account.
+The API does \f[B]not\f[] offer a way to upload in \[lq]high
+quality\[rq] mode.
+.SS Downloading Images
+.PP
+When images are downloaded the EXIF location is stripped (according to
+the docs and my tests).
+This is a limitation of the Google Photos API and is covered by bug
+#112096115 (https://issuetracker.google.com/issues/112096115).
+.SS Downloading Videos
+.PP
+When videos are downloaded they are heavily compressed compared to
+downloading them via the Google Photos web interface.
+This is covered by bug
+#113672044 (https://issuetracker.google.com/issues/113672044).
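The `media/by-month` layout described earlier lends itself to scripted backups. A minimal sketch follows, assuming a Google Photos remote configured under the name `remote`; the destination path is illustrative, and the `rclone` invocation is only printed so the sketch can be inspected before anything is copied:

```shell
# Build the by-month source path for a given year and month, then show
# the rclone copy command that would back that month up locally.
# "remote" and "/backup/photos" are assumptions, not fixed names.
year=2000
month=01
src="remote:media/by-month/${year}/${year}-${month}"
dst="/backup/photos/${year}-${month}"
echo "would run: rclone copy ${src} ${dst}"
```

Replacing the `echo` with the real command copies one month of media; as noted above, prefer `media/by-month` over the much slower `media/by-day` for this kind of traversal.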
+.SS Duplicates
+.PP
+If a file name is duplicated in a directory then rclone will add the
+file ID into its name.
+So two files called \f[C]file.jpg\f[] would then appear as
+\f[C]file\ {123456}.jpg\f[] and \f[C]file\ {ABCDEF}.jpg\f[] (the actual
+IDs are a lot longer alas!).
+.PP
+If you upload the same image (with the same binary data) twice then
+Google Photos will deduplicate it.
+However it will retain the filename from the first upload which may
+confuse rclone.
+For example if you uploaded an image to \f[C]upload\f[] then uploaded
+the same image to \f[C]album/my_album\f[] the filename of the image in
+\f[C]album/my_album\f[] will be what it was uploaded with initially, not
+what you uploaded it with to \f[C]album\f[].
+In practice this shouldn't cause too many problems.
+.SS Modified time
+.PP
+The date shown for media in Google Photos is the creation date as
+determined by the EXIF information, or the upload date if that is not
+known.
+.PP
+This is not changeable by rclone and is not the modification date of the
+media on local disk.
+This means that rclone cannot use the dates from Google Photos for
+syncing purposes.
+.SS Size
+.PP
+The Google Photos API does not return the size of media.
+This means that when syncing to Google Photos, rclone can only do a file
+existence check.
+.PP
+It is possible to read the size of the media, but this needs an extra
+HTTP HEAD request per media item so is very slow and uses up a lot of
+transactions.
+This can be enabled with the \f[C]\-\-gphotos\-read\-size\f[] option or
+the \f[C]read_size\ =\ true\f[] config parameter.
+.PP
+If you want to use the backend with \f[C]rclone\ mount\f[] you will need
+to enable this flag, otherwise you will not be able to read media off
+the mount.
+.SS Albums
+.PP
+Rclone can only upload files to albums it created.
+This is a limitation of the Google Photos
+API (https://developers.google.com/photos/library/guides/manage-albums).
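Since rclone can only upload into albums it created itself, a typical flow is to create the album with rclone first and then copy into it. A hedged sketch, where the remote name, album name and local path are all illustrative; a `run` helper echoes the commands by default so nothing touches a real remote unless `DRY_RUN=0` is set:

```shell
# Echo commands instead of executing them unless DRY_RUN=0.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Create the album via rclone so it is writable by rclone, then copy into it.
run rclone mkdir remote:album/holiday-2019
run rclone copy /home/user/holiday-pics remote:album/holiday-2019
```

Albums created outside rclone (in the Google Photos app or web interface) will not accept uploads this way, because of the API limitation above.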
+.PP
+Rclone can only remove files it uploaded from albums it created.
+.SS Deleting files
+.PP
+Rclone can remove files from albums it created, but note that the Google
+Photos API does not allow media to be deleted permanently so this media
+will still remain.
+See bug #109759781 (https://issuetracker.google.com/issues/109759781).
+.PP
+Rclone cannot delete files anywhere except under \f[C]album\f[].
+.SS Deleting albums
+.PP
+The Google Photos API does not support deleting albums \- see bug
+#135714733 (https://issuetracker.google.com/issues/135714733).
+.SS Standard Options
+.PP
+Here are the standard options specific to google photos (Google Photos).
+.SS \[en]gphotos\-client\-id
+.PP
+Google Application Client Id.
+Leave blank normally.
+.IP \[bu] 2
+Config: client_id
+.IP \[bu] 2
+Env Var: RCLONE_GPHOTOS_CLIENT_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \[en]gphotos\-client\-secret
+.PP
+Google Application Client Secret.
+Leave blank normally.
+.IP \[bu] 2
+Config: client_secret
+.IP \[bu] 2
+Env Var: RCLONE_GPHOTOS_CLIENT_SECRET
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS \[en]gphotos\-read\-only
+.PP
+Set to make the Google Photos backend read only.
+.PP
+If you choose read only then rclone will only request read only access
+to your photos, otherwise rclone will request full access.
+.IP \[bu] 2
+Config: read_only
+.IP \[bu] 2
+Env Var: RCLONE_GPHOTOS_READ_ONLY
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS Advanced Options
+.PP
+Here are the advanced options specific to google photos (Google Photos).
+.SS \[en]gphotos\-read\-size
+.PP
+Set to read the size of media items.
+.PP
+Normally rclone does not read the size of media items since this takes
+another transaction.
+This isn't necessary for syncing.
+However rclone mount needs to know the size of files in advance of
+reading them, so setting this flag when using rclone mount is
+recommended if you want to read the media.
+.IP \[bu] 2
+Config: read_size
+.IP \[bu] 2
+Env Var: RCLONE_GPHOTOS_READ_SIZE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS HTTP
.PP
The HTTP remote is a read only remote for reading files of a webserver.
@@ -17107,36 +18904,10 @@ n/s/q>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
-\ 1\ /\ Amazon\ Drive
-\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
-\ \ \ \\\ "s3"
-\ 3\ /\ Backblaze\ B2
-\ \ \ \\\ "b2"
-\ 4\ /\ Dropbox
-\ \ \ \\\ "dropbox"
-\ 5\ /\ Encrypt/Decrypt\ a\ remote
-\ \ \ \\\ "crypt"
-\ 6\ /\ FTP\ Connection
-\ \ \ \\\ "ftp"
-\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
-\ \ \ \\\ "google\ cloud\ storage"
-\ 8\ /\ Google\ Drive
-\ \ \ \\\ "drive"
-\ 9\ /\ Hubic
-\ \ \ \\\ "hubic"
-10\ /\ Local\ Disk
-\ \ \ \\\ "local"
-11\ /\ Microsoft\ OneDrive
-\ \ \ \\\ "onedrive"
-12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
-\ \ \ \\\ "swift"
-13\ /\ SSH/SFTP\ Connection
-\ \ \ \\\ "sftp"
-14\ /\ Yandex\ Disk
-\ \ \ \\\ "yandex"
-15\ /\ http\ Connection
+[snip]
+XX\ /\ http\ Connection
\ \ \ \\\ "http"
+[snip]
Storage>\ http
URL\ of\ http\ host\ to\ connect\ to
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
@@ -17247,6 +19018,28 @@ Connect to example.com using a username and password
.SS Advanced Options
.PP
Here are the advanced options specific to http (http Connection).
+.SS \[en]http\-headers
+.PP
+Set HTTP headers for all transactions.
+.PP
+Use this to set additional HTTP headers for all transactions.
+.PP
+The input format is a comma separated list of key,value pairs.
+Standard CSV encoding (https://godoc.org/encoding/csv) may be used.
+.PP
+For example to set a Cookie use `Cookie,name=value', or
+`\[lq]Cookie\[rq],\[lq]name=value\[rq]'.
+.PP
+You can set multiple headers, eg
+`\[lq]Cookie\[rq],\[lq]name=value\[rq],\[lq]Authorization\[rq],\[lq]xxx\[rq]'.
+.IP \[bu] 2 +Config: headers +.IP \[bu] 2 +Env Var: RCLONE_HTTP_HEADERS +.IP \[bu] 2 +Type: CommaSepList +.IP \[bu] 2 +Default: .SS \[en]http\-no\-slash .PP Set this if the site doesn't end directories with / @@ -17301,33 +19094,11 @@ n/s>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic +[snip] +XX\ /\ Hubic \ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 8 +[snip] +Storage>\ hubic Hubic\ Client\ Id\ \-\ leave\ blank\ normally. client_id> Hubic\ Client\ Secret\ \-\ leave\ blank\ normally. @@ -17520,15 +19291,12 @@ Type\ of\ storage\ to\ configure. Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value [snip] -14\ /\ JottaCloud +XX\ /\ JottaCloud \ \ \ \\\ "jottacloud" [snip] Storage>\ jottacloud **\ See\ help\ for\ jottacloud\ backend\ at:\ https://rclone.org/jottacloud/\ ** -User\ Name: -Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). -user>\ user\@email.tld Edit\ advanced\ config?\ (y/n) y)\ Yes n)\ No @@ -17542,6 +19310,7 @@ Rclone\ has\ it\[aq]s\ own\ Jottacloud\ API\ KEY\ which\ works\ fine\ as\ long\ y)\ Yes n)\ No y/n>\ y +Username>\ 0xC4KE\@gmail.com Your\ Jottacloud\ password\ is\ only\ required\ during\ setup\ and\ will\ not\ be\ stored. 
password: @@ -17553,7 +19322,7 @@ y/n>\ y Please\ select\ the\ device\ to\ use.\ Normally\ this\ will\ be\ Jotta Choose\ a\ number\ from\ below,\ or\ type\ in\ an\ existing\ value \ 1\ >\ DESKTOP\-3H31129 -\ 2\ >\ test1 +\ 2\ >\ fla1 \ 3\ >\ Jotta Devices>\ 3 Please\ select\ the\ mountpoint\ to\ user.\ Normally\ this\ will\ be\ Archive @@ -17669,20 +19438,6 @@ You will asked during setting up the remote. Please be aware that this also means that copying the rclone config from one machine to another does NOT work with Jottacloud accounts. You have to create it on each machine. -.SS Standard Options -.PP -Here are the standard options specific to jottacloud (JottaCloud). -.SS \[en]jottacloud\-user -.PP -User Name: -.IP \[bu] 2 -Config: user -.IP \[bu] 2 -Env Var: RCLONE_JOTTACLOUD_USER -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Default: "" .SS Advanced Options .PP Here are the advanced options specific to jottacloud (JottaCloud). @@ -17788,60 +19543,10 @@ name>\ koofr\ Type\ of\ storage\ to\ configure. Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). 
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ A\ stackable\ unification\ remote,\ which\ can\ appear\ to\ merge\ the\ contents\ of\ several\ remotes -\ \ \ \\\ "union" -\ 2\ /\ Alias\ for\ an\ existing\ remote -\ \ \ \\\ "alias" -\ 3\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 4\ /\ Amazon\ S3\ Compliant\ Storage\ Provider\ (AWS,\ Alibaba,\ Ceph,\ Digital\ Ocean,\ Dreamhost,\ IBM\ COS,\ Minio,\ etc) -\ \ \ \\\ "s3" -\ 5\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 6\ /\ Box -\ \ \ \\\ "box" -\ 7\ /\ Cache\ a\ remote -\ \ \ \\\ "cache" -\ 8\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 9\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -10\ /\ FTP\ Connection -\ \ \ \\\ "ftp" -11\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -12\ /\ Google\ Drive -\ \ \ \\\ "drive" -13\ /\ Hubic -\ \ \ \\\ "hubic" -14\ /\ JottaCloud -\ \ \ \\\ "jottacloud" -15\ /\ Koofr +[snip] +XX\ /\ Koofr \ \ \ \\\ "koofr" -16\ /\ Local\ Disk -\ \ \ \\\ "local" -17\ /\ Mega -\ \ \ \\\ "mega" -18\ /\ Microsoft\ Azure\ Blob\ Storage -\ \ \ \\\ "azureblob" -19\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -20\ /\ OpenDrive -\ \ \ \\\ "opendrive" -21\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -22\ /\ Pcloud -\ \ \ \\\ "pcloud" -23\ /\ QingCloud\ Object\ Storage -\ \ \ \\\ "qingstor" -24\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -25\ /\ Webdav -\ \ \ \\\ "webdav" -26\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -27\ /\ http\ Connection -\ \ \ \\\ "http" +[snip] Storage>\ koofr **\ See\ help\ for\ koofr\ backend\ at:\ https://rclone.org/koofr/\ ** @@ -17956,6 +19661,19 @@ Env Var: RCLONE_KOOFR_MOUNTID Type: string .IP \[bu] 2 Default: "" +.SS \[en]koofr\-setmtime +.PP +Does the backend support setting modification time. +Set this to false if you use a mount ID that points to a Dropbox or +Amazon Drive backend. 
+.IP \[bu] 2 +Config: setmtime +.IP \[bu] 2 +Env Var: RCLONE_KOOFR_SETMTIME +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: true .SS Limitations .PP Note that Koofr is case insensitive so you can't have a file called @@ -17997,14 +19715,10 @@ n/s/q>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Alias\ for\ an\ existing\ remote -\ \ \ \\\ "alias" [snip] -14\ /\ Mega +XX\ /\ Mega \ \ \ \\\ "mega" [snip] -23\ /\ http\ Connection -\ \ \ \\\ "http" Storage>\ mega User\ name user>\ you\@example.com @@ -18217,40 +19931,10 @@ n/s/q>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Box -\ \ \ \\\ "box" -\ 5\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 6\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 7\ /\ FTP\ Connection -\ \ \ \\\ "ftp" -\ 8\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 9\ /\ Google\ Drive -\ \ \ \\\ "drive" -10\ /\ Hubic -\ \ \ \\\ "hubic" -11\ /\ Local\ Disk -\ \ \ \\\ "local" -12\ /\ Microsoft\ Azure\ Blob\ Storage +[snip] +XX\ /\ Microsoft\ Azure\ Blob\ Storage \ \ \ \\\ "azureblob" -13\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -14\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -15\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -16\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -17\ /\ http\ Connection -\ \ \ \\\ "http" +[snip] Storage>\ azureblob Storage\ Account\ Name account>\ account_name @@ -18392,7 +20076,7 @@ Here are the standard options specific to azureblob (Microsoft Azure Blob Storage). 
.SS \[en]azureblob\-account
.PP
-Storage Account Name (leave blank to use connection string or SAS URL)
+Storage Account Name (leave blank to use SAS URL or Emulator)
.IP \[bu] 2
Config: account
.IP \[bu] 2
Env Var: RCLONE_AZUREBLOB_ACCOUNT
.IP \[bu] 2
@@ -18403,7 +20087,7 @@ Type: string
Default: ""
.SS \[en]azureblob\-key
.PP
-Storage Account Key (leave blank to use connection string or SAS URL)
+Storage Account Key (leave blank to use SAS URL or Emulator)
.IP \[bu] 2
Config: key
.IP \[bu] 2
@@ -18415,7 +20099,7 @@ Default: ""
.SS \[en]azureblob\-sas\-url
.PP
SAS URL for container level access only (leave blank if using
-account/key or connection string)
+account/key or Emulator)
.IP \[bu] 2
Config: sas_url
.IP \[bu] 2
@@ -18424,6 +20108,18 @@ Env Var: RCLONE_AZUREBLOB_SAS_URL
Type: string
.IP \[bu] 2
Default: ""
+.SS \[en]azureblob\-use\-emulator
+.PP
+Uses local storage emulator if provided as `true' (leave blank if using
+real azure storage endpoint)
+.IP \[bu] 2
+Config: use_emulator
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_USE_EMULATOR
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS Advanced Options
.PP
Here are the advanced options specific to azureblob (Microsoft Azure
@@ -18516,6 +20212,13 @@ Default: ""
MD5 sums are only uploaded with chunked files if the source has an MD5
sum.
This will always be the case for a local to azure copy.
+.SS Azure Storage Emulator Support
+.PP
+You can test rclone with the storage emulator locally.
+To do this make sure the Azure storage emulator is installed locally,
+then set up a new remote with \f[C]rclone\ config\f[] following the
+instructions in the introduction, setting the \f[C]use_emulator\f[]
+config option to \f[C]true\f[].
+You do not need to provide a default account name or key if using the
+emulator.
.SS Microsoft OneDrive
.PP
Paths are specified as \f[C]remote:path\f[]
@@ -18552,11 +20255,11 @@ name>\ remote
Type\ of\ storage\ to\ configure.
Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ ("").
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
-\&...
-18\ /\ Microsoft\ OneDrive +[snip] +XX\ /\ Microsoft\ OneDrive \ \ \ \\\ "onedrive" -\&... -Storage>\ 18 +[snip] +Storage>\ onedrive Microsoft\ App\ Client\ Id Leave\ blank\ normally. Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ (""). @@ -18938,35 +20641,11 @@ e/n/d/q>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic -\ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ OpenDrive +[snip] +XX\ /\ OpenDrive \ \ \ \\\ "opendrive" -11\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -13\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -14\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 10 +[snip] +Storage>\ opendrive Username username> Password @@ -19083,37 +20762,11 @@ n/r/c/s/q>\ n name>\ remote Type\ of\ storage\ to\ configure. 
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ FTP\ Connection -\ \ \ \\\ "ftp" -\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 8\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 9\ /\ Hubic -\ \ \ \\\ "hubic" -10\ /\ Local\ Disk -\ \ \ \\\ "local" -11\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -13\ /\ QingStor\ Object\ Storage +[snip] +XX\ /\ QingStor\ Object\ Storage \ \ \ \\\ "qingstor" -14\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -15\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -Storage>\ 13 +[snip] +Storage>\ qingstor Get\ QingStor\ credentials\ from\ runtime.\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value \ 1\ /\ Enter\ QingStor\ credentials\ in\ the\ next\ step @@ -19459,48 +21112,10 @@ n/s/q>\ n name>\ remote Type\ of\ storage\ to\ configure. 
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Box -\ \ \ \\\ "box" -\ 5\ /\ Cache\ a\ remote -\ \ \ \\\ "cache" -\ 6\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 7\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 8\ /\ FTP\ Connection -\ \ \ \\\ "ftp" -\ 9\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -10\ /\ Google\ Drive -\ \ \ \\\ "drive" -11\ /\ Hubic -\ \ \ \\\ "hubic" -12\ /\ Local\ Disk -\ \ \ \\\ "local" -13\ /\ Microsoft\ Azure\ Blob\ Storage -\ \ \ \\\ "azureblob" -14\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -15\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) +[snip] +XX\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) \ \ \ \\\ "swift" -16\ /\ Pcloud -\ \ \ \\\ "pcloud" -17\ /\ QingCloud\ Object\ Storage -\ \ \ \\\ "qingstor" -18\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -19\ /\ Webdav -\ \ \ \\\ "webdav" -20\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -21\ /\ http\ Connection -\ \ \ \\\ "http" +[snip] Storage>\ swift Get\ swift\ credentials\ from\ environment\ variables\ in\ standard\ OpenStack\ form. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value @@ -20122,44 +21737,10 @@ n/s/q>\ n name>\ remote Type\ of\ storage\ to\ configure. 
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Box -\ \ \ \\\ "box" -\ 5\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 6\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 7\ /\ FTP\ Connection -\ \ \ \\\ "ftp" -\ 8\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 9\ /\ Google\ Drive -\ \ \ \\\ "drive" -10\ /\ Hubic -\ \ \ \\\ "hubic" -11\ /\ Local\ Disk -\ \ \ \\\ "local" -12\ /\ Microsoft\ Azure\ Blob\ Storage -\ \ \ \\\ "azureblob" -13\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -14\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -15\ /\ Pcloud +[snip] +XX\ /\ Pcloud \ \ \ \\\ "pcloud" -16\ /\ QingCloud\ Object\ Storage -\ \ \ \\\ "qingstor" -17\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -18\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -19\ /\ http\ Connection -\ \ \ \\\ "http" +[snip] Storage>\ pcloud Pcloud\ App\ Client\ Id\ \-\ leave\ blank\ normally. client_id>\ @@ -20264,11 +21845,257 @@ Env Var: RCLONE_PCLOUD_CLIENT_SECRET Type: string .IP \[bu] 2 Default: "" +.SS premiumize.me +.PP +Paths are specified as \f[C]remote:path\f[] +.PP +Paths may be as deep as required, eg +\f[C]remote:directory/subdirectory\f[]. +.PP +The initial setup for premiumize.me (https://premiumize.me/) involves +getting a token from premiumize.me which you need to do in your browser. +\f[C]rclone\ config\f[] walks you through it. +.PP +Here is an example of how to make a remote called \f[C]remote\f[]. +First run: +.IP +.nf +\f[C] +\ rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +No\ remotes\ found\ \-\ make\ a\ new\ one +n)\ New\ remote +s)\ Set\ configuration\ password +q)\ Quit\ config +n/s/q>\ n +name>\ remote +Type\ of\ storage\ to\ configure. 
+Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ ("").
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+[snip]
+XX\ /\ premiumize.me
+\ \ \ \\\ "premiumizeme"
+[snip]
+Storage>\ premiumizeme
+**\ See\ help\ for\ premiumizeme\ backend\ at:\ https://rclone.org/premiumizeme/\ **
+
+Remote\ config
+Use\ auto\ config?
+\ *\ Say\ Y\ if\ not\ sure
+\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
+y)\ Yes
+n)\ No
+y/n>\ y
+If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
+Log\ in\ and\ authorize\ rclone\ for\ access
+Waiting\ for\ code...
+Got\ code
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+type\ =\ premiumizeme
+token\ =\ {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029\-08\-07T18:44:15.548915378+01:00"}
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ 
+\f[]
+.fi
+.PP
+See the remote setup docs (https://rclone.org/remote_setup/) for how to
+set it up on a machine with no Internet browser available.
+.PP
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from premiumize.me.
+This only runs from the moment it opens your browser to the moment you
+get back the verification code.
+This is on \f[C]http://127.0.0.1:53682/\f[] and it may require you
+to unblock it temporarily if you are running a host firewall.
+.PP
+Once configured you can then use \f[C]rclone\f[] like this,
+.PP
+List directories in top level of your premiumize.me
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+List all the files in your premiumize.me
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:
+\f[]
+.fi
+.PP
+To copy a local directory to a premiumize.me directory called backup
+.IP
+.nf
+\f[C]
+rclone\ copy\ /home/source\ remote:backup
+\f[]
+.fi
+.SS Modified time and hashes
+.PP
+premiumize.me does not support modification times or hashes, therefore
+syncing will default to \f[C]\-\-size\-only\f[] checking.
+Note that using \f[C]\-\-update\f[] will work.
+.SS Standard Options
+.PP
+Here are the standard options specific to premiumizeme (premiumize.me).
+.SS \[en]premiumizeme\-api\-key
+.PP
+API Key.
+.PP
+This is not normally used \- use oauth instead.
+.IP \[bu] 2
+Config: api_key
+.IP \[bu] 2
+Env Var: RCLONE_PREMIUMIZEME_API_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: ""
+.SS Limitations
+.PP
+Note that premiumize.me is case insensitive so you can't have a file
+called \[lq]Hello.doc\[rq] and one called \[lq]hello.doc\[rq].
+.PP
+premiumize.me file names can't have the \f[C]\\\f[] or \f[C]"\f[]
+characters in.
+rclone maps these to and from identical looking unicode equivalents
+\f[C]\\f[] and \f[C]"\f[]
+.PP
+premiumize.me only supports filenames up to 255 characters in length.
+.SS put.io
+.PP
+Paths are specified as \f[C]remote:path\f[]
+.PP
+put.io paths may be as deep as required, eg
+\f[C]remote:directory/subdirectory\f[].
+.PP
+The initial setup for put.io involves getting a token from put.io which
+you need to do in your browser.
+\f[C]rclone\ config\f[] walks you through it.
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[].
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/s/q>\ n
+name>\ putio
+Type\ of\ storage\ to\ configure.
+Enter\ a\ string\ value.\ Press\ Enter\ for\ the\ default\ ("").
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+[snip]
+XX\ /\ Put.io
+\ \ \ \\\ "putio"
+[snip]
+Storage>\ putio
+**\ See\ help\ for\ putio\ backend\ at:\ https://rclone.org/putio/\ **
+
+Remote\ config
+Use\ auto\ config?
+\ *\ Say\ Y\ if\ not\ sure
+\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
+y)\ Yes
+n)\ No
+y/n>\ y
+If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
+Log\ in\ and\ authorize\ rclone\ for\ access
+Waiting\ for\ code...
+Got\ code
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[putio]
+type\ =\ putio
+token\ =\ {"access_token":"XXXXXXXX","expiry":"0001\-01\-01T00:00:00Z"}
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+Current\ remotes:
+
+Name\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Type
+====\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ====
+putio\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ putio
+
+e)\ Edit\ existing\ remote
+n)\ New\ remote
+d)\ Delete\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+e/n/d/r/c/s/q>\ q
+\f[]
+.fi
+.PP
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from put.io if you use auto config mode.
+This only runs from the moment it opens your browser to the moment you
+get back the verification code.
+This is on \f[C]http://127.0.0.1:53682/\f[] and it may require you
+to unblock it temporarily if you are running a host firewall, or use
+manual mode.
+.PP +You can then use it like this, +.PP +List directories in top level of your put.io +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +List all the files in your put.io +.IP +.nf +\f[C] +rclone\ ls\ remote: +\f[] +.fi +.PP +To copy a local directory to a put.io directory called backup +.IP +.nf +\f[C] +rclone\ copy\ /home/source\ remote:backup +\f[] +.fi .SS SFTP .PP SFTP is the Secure (or SSH) File Transfer Protocol (https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol). .PP +The SFTP backend can be used with a number of different providers: +.IP \[bu] 2 +C14 +.IP \[bu] 2 +rsync.net +.PP SFTP runs over SSH v2 and is installed as standard with most modern SSH installations. .PP @@ -20302,36 +22129,10 @@ n/s/q>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ FTP\ Connection -\ \ \ \\\ "ftp" -\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 8\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 9\ /\ Hubic -\ \ \ \\\ "hubic" -10\ /\ Local\ Disk -\ \ \ \\\ "local" -11\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -13\ /\ SSH/SFTP\ Connection +[snip] +XX\ /\ SSH/SFTP\ Connection \ \ \ \\\ "sftp" -14\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -15\ /\ http\ Connection -\ \ \ \\\ "http" +[snip] Storage>\ sftp SSH\ host\ to\ connect\ to Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value @@ -20341,22 +22142,22 @@ host>\ example.com SSH\ username,\ leave\ blank\ for\ current\ username,\ ncw user>\ sftpuser SSH\ port,\ leave\ blank\ to\ use\ default\ (22) -port>\ +port> SSH\ password,\ leave\ 
blank\ to\ use\ ssh\-agent. y)\ Yes\ type\ in\ my\ own\ password g)\ Generate\ random\ password n)\ No\ leave\ this\ optional\ password\ blank y/g/n>\ n Path\ to\ unencrypted\ PEM\-encoded\ private\ key\ file,\ leave\ blank\ to\ use\ ssh\-agent. -key_file>\ +key_file> Remote\ config \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- [remote] host\ =\ example.com user\ =\ sftpuser -port\ =\ -pass\ =\ -key_file\ =\ +port\ = +pass\ = +key_file\ = \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- y)\ Yes\ this\ is\ OK e)\ Edit\ this\ remote @@ -20563,9 +22364,11 @@ Type: bool Default: false .SS \[en]sftp\-use\-insecure\-cipher .PP -Enable the use of the aes128\-cbc cipher. -This cipher is insecure and may allow plaintext data to be recovered by -an attacker. +Enable the use of the aes128\-cbc cipher and +diffie\-hellman\-group\-exchange\-sha256, +diffie\-hellman\-group\-exchange\-sha1 key exchange. +Those algorithms are insecure and may allow plaintext data to be +recovered by an attacker. .IP \[bu] 2 Config: use_insecure_cipher .IP \[bu] 2 @@ -20587,7 +22390,9 @@ Use default Cipher list. \[lq]true\[rq] .RS 2 .IP \[bu] 2 -Enables the use of the aes128\-cbc cipher. +Enables the use of the aes128\-cbc cipher and +diffie\-hellman\-group\-exchange\-sha256, +diffie\-hellman\-group\-exchange\-sha1 key exchange. .RE .RE .SS \[en]sftp\-disable\-hashcheck @@ -20659,6 +22464,30 @@ Env Var: RCLONE_SFTP_SET_MODTIME Type: bool .IP \[bu] 2 Default: true +.SS \[en]sftp\-md5sum\-command +.PP +The command used to read md5 hashes. +Leave blank for autodetect. +.IP \[bu] 2 +Config: md5sum_command +.IP \[bu] 2 +Env Var: RCLONE_SFTP_MD5SUM_COMMAND +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: "" +.SS \[en]sftp\-sha1sum\-command +.PP +The command used to read sha1 hashes. +Leave blank for autodetect. 
+.IP \[bu] 2 +Config: sha1sum_command +.IP \[bu] 2 +Env Var: RCLONE_SFTP_SHA1SUM_COMMAND +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: "" .SS Limitations .PP SFTP supports checksums if the same login has shell access and @@ -20703,6 +22532,18 @@ with it: \f[C]\-\-dump\-headers\f[], \f[C]\-\-dump\-bodies\f[], .PP Note that \f[C]\-\-timeout\f[] isn't supported (but \f[C]\-\-contimeout\f[] is). +.SS C14 +.PP +C14 is supported through the SFTP backend. +.PP +See C14's +documentation (https://www.online.net/en/storage/c14-cold-storage) +.SS rsync.net +.PP +rsync.net is supported through the SFTP backend. +.PP +See rsync.net's documentation of rclone +examples (https://www.rsync.net/products/rclone.html). .SS Union .PP The \f[C]union\f[] remote provides a unification similar to UnionFS @@ -20758,58 +22599,10 @@ n/s/q>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Alias\ for\ an\ existing\ remote -\ \ \ \\\ "alias" -\ 2\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 3\ /\ Amazon\ S3\ Compliant\ Storage\ Providers\ (AWS,\ Ceph,\ Dreamhost,\ IBM\ COS,\ Minio) -\ \ \ \\\ "s3" -\ 4\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 5\ /\ Box -\ \ \ \\\ "box" -\ 6\ /\ Builds\ a\ stackable\ unification\ remote,\ which\ can\ appear\ to\ merge\ the\ contents\ of\ several\ remotes +[snip] +XX\ /\ Union\ merges\ the\ contents\ of\ several\ remotes \ \ \ \\\ "union" -\ 7\ /\ Cache\ a\ remote -\ \ \ \\\ "cache" -\ 8\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 9\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -10\ /\ FTP\ Connection -\ \ \ \\\ "ftp" -11\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -12\ /\ Google\ Drive -\ \ \ \\\ "drive" -13\ /\ Hubic -\ \ \ \\\ "hubic" -14\ /\ JottaCloud -\ \ \ \\\ "jottacloud" -15\ /\ Local\ Disk -\ \ \ \\\ "local" -16\ /\ Mega -\ \ \ \\\ "mega" -17\ /\ Microsoft\ Azure\ Blob\ Storage -\ \ \ \\\ "azureblob" -18\ /\ Microsoft\ 
OneDrive -\ \ \ \\\ "onedrive" -19\ /\ OpenDrive -\ \ \ \\\ "opendrive" -20\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -21\ /\ Pcloud -\ \ \ \\\ "pcloud" -22\ /\ QingCloud\ Object\ Storage -\ \ \ \\\ "qingstor" -23\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -24\ /\ Webdav -\ \ \ \\\ "webdav" -25\ /\ Yandex\ Disk -\ \ \ \\\ "yandex" -26\ /\ http\ Connection -\ \ \ \\\ "http" +[snip] Storage>\ union List\ of\ space\ separated\ remotes. Can\ be\ \[aq]remotea:test/dir\ remoteb:\[aq],\ \[aq]"remotea:test/space\ dir"\ remoteb:\[aq],\ etc. @@ -20873,8 +22666,8 @@ rclone\ copy\ C:\\source\ remote:source .fi .SS Standard Options .PP -Here are the standard options specific to union (A stackable unification -remote, which can appear to merge the contents of several remotes). +Here are the standard options specific to union (Union merges the +contents of several remotes). .SS \[en]union\-remotes .PP List of space separated remotes. @@ -20923,7 +22716,7 @@ name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value [snip] -22\ /\ Webdav +XX\ /\ Webdav \ \ \ \\\ "webdav" [snip] Storage>\ webdav @@ -20955,7 +22748,7 @@ password: Confirm\ the\ password: password: Bearer\ token\ instead\ of\ user/pass\ (eg\ a\ Macaroon) -bearer_token>\ +bearer_token> Remote\ config \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- [remote] @@ -20964,7 +22757,7 @@ url\ =\ https://example.com/remote.php/webdav/ vendor\ =\ nextcloud user\ =\ user pass\ =\ ***\ ENCRYPTED\ *** -bearer_token\ =\ +bearer_token\ = \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- y)\ Yes\ this\ is\ OK e)\ Edit\ this\ remote @@ -21105,6 +22898,20 @@ Env Var: RCLONE_WEBDAV_BEARER_TOKEN Type: string .IP \[bu] 2 Default: "" +.SS Advanced Options +.PP +Here are the advanced options specific to webdav (Webdav). 
+.SS \[en]webdav\-bearer\-token\-command +.PP +Command to run to get a bearer token +.IP \[bu] 2 +Config: bearer_token_command +.IP \[bu] 2 +Env Var: RCLONE_WEBDAV_BEARER_TOKEN_COMMAND +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: "" .SS Provider notes .PP See below for notes on specific providers. @@ -21124,34 +22931,6 @@ whereas Owncloud does. This may be fixed (https://github.com/nextcloud/nextcloud-snap/issues/365) in the future. -.SS Put.io -.PP -put.io can be accessed in a read only way using webdav. -.PP -Configure the \f[C]url\f[] as \f[C]https://webdav.put.io\f[] and use -your normal account username and password for \f[C]user\f[] and -\f[C]pass\f[]. -Set the \f[C]vendor\f[] to \f[C]other\f[]. -.PP -Your config file should end up looking like this: -.IP -.nf -\f[C] -[putio] -type\ =\ webdav -url\ =\ https://webdav.put.io -vendor\ =\ other -user\ =\ YourUserName -pass\ =\ encryptedpassword -\f[] -.fi -.PP -If you are using \f[C]put.io\f[] with \f[C]rclone\ mount\f[] then use -the \f[C]\-\-read\-only\f[] flag to signal to the OS that it can't write -to the mount. -.PP -For more help see the put.io webdav -docs (http://help.put.io/apps-and-integrations/ftp-and-webdav). .SS Sharepoint .PP Rclone can be used with Sharepoint provided by OneDrive for Business or @@ -21199,10 +22978,13 @@ pass\ =\ encryptedpassword .fi .SS dCache .PP -dCache (https://www.dcache.org/) is a storage system with WebDAV doors -that support, beside basic and x509, authentication with +dCache is a storage system that supports many protocols and +authentication/authorisation schemes. +For WebDAV clients, it allows users to authenticate with username and +password (BASIC), X.509, Kerberos, and various bearer tokens, including Macaroons (https://www.dcache.org/manuals/workshop-2017-05-29-Umea/000-Final/anupam_macaroons_v02.pdf) -(bearer tokens). +and OpenID\-Connect (https://en.wikipedia.org/wiki/OpenID_Connect) +access tokens. .PP Configure as normal using the \f[C]other\f[] type. 
Don't enter a username or password, instead enter your Macaroon as the @@ -21226,6 +23008,62 @@ There is a script (https://github.com/sara-nl/GridScripts/blob/master/get-macaroon) that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file. +.PP +Macaroons may also be obtained from the dCacheView +web\-browser/JavaScript client that comes with dCache. +.SS OpenID\-Connect +.PP +dCache also supports authenticating with OpenID\-Connect access tokens. +OpenID\-Connect is a protocol (based on OAuth 2.0) that allows services +to identify users who have authenticated with some central service. +.PP +Support for OpenID\-Connect in rclone is currently achieved using +another software package called +oidc\-agent (https://github.com/indigo-dc/oidc-agent). +This is a command\-line tool that facilitates obtaining an access token. +Once installed and configured, an access token is obtained by running +the \f[C]oidc\-token\f[] command. +The following example shows a (shortened) access token obtained from the +\f[I]XDC\f[] OIDC Provider. +.IP +.nf +\f[C] +paul\@celebrimbor:~$\ oidc\-token\ XDC +eyJraWQ[...]QFXDt0 +paul\@celebrimbor:~$ +\f[] +.fi +.PP +\f[B]Note\f[] Before the \f[C]oidc\-token\f[] command will work, the +refresh token must be loaded into the oidc agent. +This is done with the \f[C]oidc\-add\f[] command (e.g., +\f[C]oidc\-add\ XDC\f[]). +This is typically done once per login session. +Full details on this and how to register oidc\-agent with your OIDC +Provider are provided in the oidc\-agent +documentation (https://indigo-dc.gitbooks.io/oidc-agent/). +.PP +The rclone \f[C]bearer_token_command\f[] configuration option is used to +fetch the access token from oidc\-agent. +.PP +Configure as a normal WebDAV endpoint, using the `other' vendor, leaving +the username and password empty. +When prompted, choose to edit the advanced config and enter the command +to get a bearer token (e.g., \f[C]oidc\-agent\ XDC\f[]). 
+.PP +The following example config shows a WebDAV endpoint that uses +oidc\-agent to supply an access token from the \f[I]XDC\f[] OIDC +Provider. +.IP +.nf +\f[C] +[dcache] +type\ =\ webdav +url\ =\ https://dcache.example.org/ +vendor\ =\ other +bearer_token_command\ =\ oidc\-token\ XDC +\f[] +.fi .SS Yandex Disk .PP Yandex Disk (https://disk.yandex.com) is a cloud storage solution @@ -21254,33 +23092,11 @@ n/s>\ n name>\ remote Type\ of\ storage\ to\ configure. Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value -\ 1\ /\ Amazon\ Drive -\ \ \ \\\ "amazon\ cloud\ drive" -\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio) -\ \ \ \\\ "s3" -\ 3\ /\ Backblaze\ B2 -\ \ \ \\\ "b2" -\ 4\ /\ Dropbox -\ \ \ \\\ "dropbox" -\ 5\ /\ Encrypt/Decrypt\ a\ remote -\ \ \ \\\ "crypt" -\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive) -\ \ \ \\\ "google\ cloud\ storage" -\ 7\ /\ Google\ Drive -\ \ \ \\\ "drive" -\ 8\ /\ Hubic -\ \ \ \\\ "hubic" -\ 9\ /\ Local\ Disk -\ \ \ \\\ "local" -10\ /\ Microsoft\ OneDrive -\ \ \ \\\ "onedrive" -11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH) -\ \ \ \\\ "swift" -12\ /\ SSH/SFTP\ Connection -\ \ \ \\\ "sftp" -13\ /\ Yandex\ Disk +[snip] +XX\ /\ Yandex\ Disk \ \ \ \\\ "yandex" -Storage>\ 13 +[snip] +Storage>\ yandex Yandex\ Client\ Id\ \-\ leave\ blank\ normally. client_id> Yandex\ Client\ Secret\ \-\ leave\ blank\ normally. @@ -21801,7 +23617,365 @@ Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM Type: bool .IP \[bu] 2 Default: false +.SS \[en]local\-case\-sensitive +.PP +Force the filesystem to report itself as case sensitive. +.PP +Normally the local backend declares itself as case insensitive on +Windows/macOS and case sensitive for everything else. +Use this flag to override the default choice. 
+.IP \[bu] 2 +Config: case_sensitive +.IP \[bu] 2 +Env Var: RCLONE_LOCAL_CASE_SENSITIVE +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS \[en]local\-case\-insensitive +.PP +Force the filesystem to report itself as case insensitive +.PP +Normally the local backend declares itself as case insensitive on +Windows/macOS and case sensitive for everything else. +Use this flag to override the default choice. +.IP \[bu] 2 +Config: case_insensitive +.IP \[bu] 2 +Env Var: RCLONE_LOCAL_CASE_INSENSITIVE +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false .SH Changelog +.SS v1.49.0 \- 2019\-08\-26 +.IP \[bu] 2 +New backends +.RS 2 +.IP \[bu] 2 +1fichier (https://rclone.org/fichier/) (Laura Hausmann) +.IP \[bu] 2 +Google Photos (/googlephotos) (Nick Craig\-Wood) +.IP \[bu] 2 +Putio (https://rclone.org/putio/) (Cenk Alti) +.IP \[bu] 2 +premiumize.me (https://rclone.org/premiumizeme/) (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +New Features +.RS 2 +.IP \[bu] 2 +Experimental web GUI (https://rclone.org/gui/) (Chaitanya Bankanhal) +.IP \[bu] 2 +Implement \f[C]\-\-compare\-dest\f[] & \f[C]\-\-copy\-dest\f[] +(yparitcher) +.IP \[bu] 2 +Implement \f[C]\-\-suffix\f[] without \f[C]\-\-backup\-dir\f[] for +backup to current dir (yparitcher) +.IP \[bu] 2 +Add \f[C]\-\-use\-json\-log\f[] for JSON logging (justinalin) +.IP \[bu] 2 +Add \f[C]config\ reconnect\f[], \f[C]config\ userinfo\f[] and +\f[C]config\ disconnect\f[] subcommands. 
+(Nick Craig\-Wood) +.IP \[bu] 2 +Add context propagation to rclone (Aleksandar Jankovic) +.IP \[bu] 2 +Reworking internal statistics interfaces so they work with rc jobs +(Aleksandar Jankovic) +.IP \[bu] 2 +Add Higher units for ETA (AbelThar) +.IP \[bu] 2 +Update rclone logos to new design (Andreas Chlupka) +.IP \[bu] 2 +hash: Add CRC\-32 support (Cenk Alti) +.IP \[bu] 2 +help showbackend: Fixed advanced option category when there are no +standard options (buengese) +.IP \[bu] 2 +ncdu: Display/Copy to Clipboard Current Path (Gary Kim) +.IP \[bu] 2 +operations: +.RS 2 +.IP \[bu] 2 +Run hashing operations in parallel (Nick Craig\-Wood) +.IP \[bu] 2 +Don't calculate checksums when using \f[C]\-\-ignore\-checksum\f[] (Nick +Craig\-Wood) +.IP \[bu] 2 +Check transfer hashes when using \f[C]\-\-size\-only\f[] mode (Nick +Craig\-Wood) +.IP \[bu] 2 +Disable multi thread copy for local to local copies (Nick Craig\-Wood) +.IP \[bu] 2 +Debug successful hashes as well as failures (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +rc +.RS 2 +.IP \[bu] 2 +Add ability to stop async jobs (Aleksandar Jankovic) +.IP \[bu] 2 +Return current settings if core/bwlimit called without parameters (Nick +Craig\-Wood) +.IP \[bu] 2 +Rclone\-WebUI integration with rclone (Chaitanya Bankanhal) +.IP \[bu] 2 +Added command line parameter to control the cross origin resource +sharing (CORS) in the rcd. 
+(Security Improvement) (Chaitanya Bankanhal) +.IP \[bu] 2 +Add anchor tags to the docs so links are consistent (Nick Craig\-Wood) +.IP \[bu] 2 +Remove _async key from input parameters after parsing so later +operations won't get confused (buengese) +.IP \[bu] 2 +Add call to clear stats (Aleksandar Jankovic) +.RE +.IP \[bu] 2 +rcd +.RS 2 +.IP \[bu] 2 +Auto\-login for web\-gui (Chaitanya Bankanhal) +.IP \[bu] 2 +Implement \f[C]\-\-baseurl\f[] for rcd and web\-gui (Chaitanya +Bankanhal) +.RE +.IP \[bu] 2 +serve dlna +.RS 2 +.IP \[bu] 2 +Only select interfaces which can multicast for SSDP (Nick Craig\-Wood) +.IP \[bu] 2 +Add more builtin mime types to cover standard audio/video (Nick +Craig\-Wood) +.IP \[bu] 2 +Fix missing mime types on Android causing missing videos (Nick +Craig\-Wood) +.RE +.IP \[bu] 2 +serve ftp +.RS 2 +.IP \[bu] 2 +Refactor to bring into line with other serve commands (Nick Craig\-Wood) +.IP \[bu] 2 +Implement \f[C]\-\-auth\-proxy\f[] (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +serve http: Implement \f[C]\-\-baseurl\f[] (Nick Craig\-Wood) +.IP \[bu] 2 +serve restic: Implement \f[C]\-\-baseurl\f[] (Nick Craig\-Wood) +.IP \[bu] 2 +serve sftp +.RS 2 +.IP \[bu] 2 +Implement auth proxy (Nick Craig\-Wood) +.IP \[bu] 2 +Fix detection of whether server is authorized (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +serve webdav +.RS 2 +.IP \[bu] 2 +Implement \f[C]\-\-baseurl\f[] (Nick Craig\-Wood) +.IP \[bu] 2 +Support \f[C]\-\-auth\-proxy\f[] (Nick Craig\-Wood) +.RE +.RE +.IP \[bu] 2 +Bug Fixes +.RS 2 +.IP \[bu] 2 +Make \[lq]bad record MAC\[rq] a retriable error (Nick Craig\-Wood) +.IP \[bu] 2 +copyurl: Fix copying files that return HTTP errors (Nick Craig\-Wood) +.IP \[bu] 2 +march: Fix checking sub\-directories when using +\f[C]\-\-no\-traverse\f[] (buengese) +.IP \[bu] 2 +rc +.RS 2 +.IP \[bu] 2 +Fix unmarshalable http.AuthFn in options and put in test for +marshalability (Nick Craig\-Wood) +.IP \[bu] 2 +Move job expire flags to rc to fix initialization problem (Nick
+Craig\-Wood) +.IP \[bu] 2 +Fix \f[C]\-\-loopback\f[] with rc/list and others (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +rcat: Fix slowdown on systems with multiple hashes (Nick Craig\-Wood) +.IP \[bu] 2 +rcd: Fix permissions problems on cache directory with web gui download +(Nick Craig\-Wood) +.RE +.IP \[bu] 2 +Mount +.RS 2 +.IP \[bu] 2 +Default \f[C]\-\-daemon\-timout\f[] to 15 minutes on macOS and FreeBSD +(Nick Craig\-Wood) +.IP \[bu] 2 +Update docs to show mounting from root OK for bucket based (Nick +Craig\-Wood) +.IP \[bu] 2 +Remove nonseekable flag from write files (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +VFS +.RS 2 +.IP \[bu] 2 +Make write without cache more efficient (Nick Craig\-Wood) +.IP \[bu] 2 +Fix \f[C]\-\-vfs\-cache\-mode\ minimal\f[] and \f[C]writes\f[] ignoring +cached files (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +Local +.RS 2 +.IP \[bu] 2 +Add \f[C]\-\-local\-case\-sensitive\f[] and +\f[C]\-\-local\-case\-insensitive\f[] (Nick Craig\-Wood) +.IP \[bu] 2 +Avoid polluting page cache when uploading local files to remote backends +(Michał Matczuk) +.IP \[bu] 2 +Don't calculate any hashes by default (Nick Craig\-Wood) +.IP \[bu] 2 +Fadvise run syscall on a dedicated go routine (Michał Matczuk) +.RE +.IP \[bu] 2 +Azure Blob +.RS 2 +.IP \[bu] 2 +Azure Storage Emulator support (Sandeep) +.IP \[bu] 2 +Updated config help details to remove connection string references +(Sandeep) +.IP \[bu] 2 +Make all operations work from the root (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +B2 +.RS 2 +.IP \[bu] 2 +Implement link sharing (yparitcher) +.IP \[bu] 2 +Enable server side copy to copy between buckets (Nick Craig\-Wood) +.IP \[bu] 2 +Make all operations work from the root (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +Drive +.RS 2 +.IP \[bu] 2 +Fix server side copy of big files (Nick Craig\-Wood) +.IP \[bu] 2 +Update API for teamdrive use (Nick Craig\-Wood) +.IP \[bu] 2 +Add error for purge with \f[C]\-\-drive\-trashed\-only\f[] (ginvine) +.RE +.IP \[bu] 2 +Fichier +.RS 2 +.IP \[bu] 2
+Make FolderID int and adjust related code (buengese) +.RE +.IP \[bu] 2 +Google Cloud Storage +.RS 2 +.IP \[bu] 2 +Reduce oauth scope requested as suggested by Google (Nick Craig\-Wood) +.IP \[bu] 2 +Make all operations work from the root (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +HTTP +.RS 2 +.IP \[bu] 2 +Add \f[C]\-\-http\-headers\f[] flag for setting arbitrary headers (Nick +Craig\-Wood) +.RE +.IP \[bu] 2 +Jottacloud +.RS 2 +.IP \[bu] 2 +Use new api for retrieving internal username (buengese) +.IP \[bu] 2 +Refactor configuration and minor cleanup (buengese) +.RE +.IP \[bu] 2 +Koofr +.RS 2 +.IP \[bu] 2 +Support setting modification times on Koofr backend. +(jaKa) +.RE +.IP \[bu] 2 +Opendrive +.RS 2 +.IP \[bu] 2 +Refactor to use existing lib/rest facilities for uploads (Nick +Craig\-Wood) +.RE +.IP \[bu] 2 +Qingstor +.RS 2 +.IP \[bu] 2 +Upgrade to v3 SDK and fix listing loop (Nick Craig\-Wood) +.IP \[bu] 2 +Make all operations work from the root (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +S3 +.RS 2 +.IP \[bu] 2 +Add INTELLIGENT_TIERING storage class (Matti Niemenmaa) +.IP \[bu] 2 +Make all operations work from the root (Nick Craig\-Wood) +.RE +.IP \[bu] 2 +SFTP +.RS 2 +.IP \[bu] 2 +Add missing interface check and fix About (Nick Craig\-Wood) +.IP \[bu] 2 +Completely ignore all modtime checks if SetModTime=false (Jon Fautley) +.IP \[bu] 2 +Support md5/sha1 with rsync.net (Nick Craig\-Wood) +.IP \[bu] 2 +Save the md5/sha1 command in use to the config file for efficiency (Nick +Craig\-Wood) +.IP \[bu] 2 +Opt\-in support for diffie\-hellman\-group\-exchange\-sha256 and +diffie\-hellman\-group\-exchange\-sha1 (Yi FU) +.RE +.IP \[bu] 2 +Swift +.RS 2 +.IP \[bu] 2 +Use FixRangeOption to fix 0 length files via the VFS (Nick Craig\-Wood) +.IP \[bu] 2 +Fix upload when using no_chunk to return the correct size (Nick +Craig\-Wood) +.IP \[bu] 2 +Make all operations work from the root (Nick Craig\-Wood) +.IP \[bu] 2 +Fix segments leak during failed large file uploads.
+(nguyenhuuluan434) +.RE +.IP \[bu] 2 +WebDAV +.RS 2 +.IP \[bu] 2 +Add \f[C]\-\-webdav\-bearer\-token\-command\f[] (Nick Craig\-Wood) +.IP \[bu] 2 +Refresh token when it expires with +\f[C]\-\-webdav\-bearer\-token\-command\f[] (Nick Craig\-Wood) +.IP \[bu] 2 +Add docs for using bearer_token_command with oidc\-agent (Paul Millar) +.RE .SS v1.48.0 \- 2019\-06\-15 .IP \[bu] 2 New commands @@ -22736,15 +24910,16 @@ retries (Nick Craig\-Wood) Bug Fixes .RS 2 .IP \[bu] 2 -cmd: Make \[en]progress update the stats correctly at the end (Nick -Craig\-Wood) +cmd: Make \f[C]\-\-progress\f[] update the stats correctly at the end +(Nick Craig\-Wood) .IP \[bu] 2 config: Create config directory on save if it is missing (Nick Craig\-Wood) .IP \[bu] 2 dedupe: Check for existing filename before renaming a dupe file (ssaqua) .IP \[bu] 2 -move: Don't create directories with \[en]dry\-run (Nick Craig\-Wood) +move: Don't create directories with \f[C]\-\-dry\-run\f[] (Nick +Craig\-Wood) .IP \[bu] 2 operations: Fix Purge and Rmdirs when dir is not the root (Nick Craig\-Wood) @@ -22869,7 +25044,8 @@ Show URL of backend help page when starting config (Nick Craig\-Wood) .IP \[bu] 2 stats: Long names now split in center (Joanna Marek) .IP \[bu] 2 -Add \[en]log\-format flag for more control over log output (dcpu) +Add \f[C]\-\-log\-format\f[] flag for more control over log output +(dcpu) .IP \[bu] 2 rc: Add support for OPTIONS and basic CORS (frenos) .IP \[bu] 2 @@ -22881,11 +25057,11 @@ Bug Fixes .IP \[bu] 2 Fix \-P not ending with a new line (Nick Craig\-Wood) .IP \[bu] 2 -config: don't create default config dir when user supplies \[en]config -(albertony) +config: don't create default config dir when user supplies +\f[C]\-\-config\f[] (albertony) .IP \[bu] 2 -Don't print non\-ASCII characters with \[en]progress on windows (Nick -Craig\-Wood) +Don't print non\-ASCII characters with \f[C]\-\-progress\f[] on windows +(Nick Craig\-Wood) .IP \[bu] 2 Correct logs for excluded items (ssaqua) .RE @@ 
-22957,7 +25133,7 @@ Fix handling of Windows network paths (Nick Craig\-Wood) Azure Blob .RS 2 .IP \[bu] 2 -Add \[en]azureblob\-list\-chunk parameter (Santiago Rodríguez) +Add \f[C]\-\-azureblob\-list\-chunk\f[] parameter (Santiago Rodríguez) .IP \[bu] 2 Implemented settier command support on azureblob remote. (sandeepkru) @@ -22976,8 +25152,8 @@ Implement link sharing. Drive .RS 2 .IP \[bu] 2 -Add \[en]drive\-import\-formats \- google docs can now be imported -(Fabian Möller) +Add \f[C]\-\-drive\-import\-formats\f[] \- google docs can now be +imported (Fabian Möller) .RS 2 .IP \[bu] 2 Rewrite mime type and extension handling (Fabian Möller) @@ -22991,8 +25167,8 @@ Add support for apps\-script to json export (Fabian Möller) Fix escaped chars in documents during list (Fabian Möller) .RE .IP \[bu] 2 -Add \[en]drive\-v2\-download\-min\-size a workaround for slow downloads -(Fabian Möller) +Add \f[C]\-\-drive\-v2\-download\-min\-size\f[] a workaround for slow +downloads (Fabian Möller) .IP \[bu] 2 Improve directory notifications in ChangeNotify (Fabian Möller) .IP \[bu] 2 @@ -23018,9 +25194,10 @@ Jottacloud .IP \[bu] 2 Minor improvement in quota info (omit if unlimited) (albertony) .IP \[bu] 2 -Add \[en]fast\-list support (albertony) +Add \f[C]\-\-fast\-list\f[] support (albertony) .IP \[bu] 2 -Add permanent delete support: \[en]jottacloud\-hard\-delete (albertony) +Add permanent delete support: \f[C]\-\-jottacloud\-hard\-delete\f[] +(albertony) .IP \[bu] 2 Add link sharing support (albertony) .IP \[bu] 2 @@ -23060,7 +25237,7 @@ Use custom pacer, to retry operations when reasonable (Craig Miskell) Use configured server\-side\-encryption and storage class options when calling CopyObject() (Paul Kohout) .IP \[bu] 2 -Make \[en]s3\-v2\-auth flag (Nick Craig\-Wood) +Make \f[C]\-\-s3\-v2\-auth\f[] flag (Nick Craig\-Wood) .IP \[bu] 2 Fix v2 auth on files with spaces (Nick Craig\-Wood) .RE @@ -23076,7 +25253,7 @@ Craig\-Wood) .IP \[bu] 2 Fix ChangeNotify to support multiple
remotes (Fabian Möller) .IP \[bu] 2 -Fix \[en]backup\-dir on union backend (Nick Craig\-Wood) +Fix \f[C]\-\-backup\-dir\f[] on union backend (Nick Craig\-Wood) .RE .IP \[bu] 2 WebDAV @@ -23106,7 +25283,8 @@ Bug Fixes .IP \[bu] 2 ncdu: Return error instead of log.Fatal in Show (Fabian Möller) .IP \[bu] 2 -cmd: Fix crash with \[en]progress and \[en]stats 0 (Nick Craig\-Wood) +cmd: Fix crash with \f[C]\-\-progress\f[] and \f[C]\-\-stats\ 0\f[] +(Nick Craig\-Wood) .IP \[bu] 2 docs: Tidy website display (Anagh Kumar Baranwal) .RE @@ -26427,28 +28605,42 @@ Project named rclone .SS v0.00 \- 2012\-11\-18 .IP \[bu] 2 Project started -.SS Bugs and Limitations -.SS Empty directories are left behind / not created -.PP -With remotes that have a concept of directory, eg Local and Drive, empty -directories may be left behind, or not created when one was expected. -.PP -This is because rclone doesn't have a concept of a directory \- it only -works on objects. -Most of the object storage systems can't actually store a directory so -there is nowhere for rclone to store anything about directories. -.PP -You can work round this to some extent with the\f[C]purge\f[] command -which will delete everything under the path, \f[B]inluding\f[] empty -directories. -.PP -This may be fixed at some point in Issue -#100 (https://github.com/rclone/rclone/issues/100) +.SH Bugs and Limitations +.SS Limitations .SS Directory timestamps aren't preserved .PP -For the same reason as the above, rclone doesn't have a concept of a -directory \- it only works on objects, therefore it can't preserve the -timestamps of directories. +Rclone doesn't currently preserve the timestamps of directories. +This is because rclone only really considers objects when syncing. +.SS Rclone struggles with millions of files in a directory +.PP +Currently rclone loads each directory entirely into memory before using +it. 
+Since each rclone object takes 0.5k\-1k of memory this can take a very +long time and use an extremely large amount of memory. +.PP +Millions of files in a directory tend to be caused by software writing +to cloud storage (eg S3 buckets). +.SS Bucket based remotes and folders +.PP +Bucket based remotes (eg S3/GCS/Swift/B2) do not have a concept of +directories. +Rclone therefore cannot create directories in them which means that +empty directories on a bucket based remote will tend to disappear. +.PP +Some software creates empty keys ending in \f[C]/\f[] as directory +markers. +Rclone doesn't do this as it potentially creates more objects and costs +more. +It may do so in the future (probably with a flag). +.SS Bugs +.PP +Bugs are stored in rclone's GitHub project: +.IP \[bu] 2 +Reported +bugs (https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug) +.IP \[bu] 2 +Known +issues (https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+milestone%3A%22Known+Problem%22) .SS Frequently Asked Questions .SS Do all cloud storage systems support all rclone commands .PP @@ -26699,7 +28891,7 @@ COPYING file included with the source code).
.IP .nf \f[C] -Copyright\ (C)\ 2012\ by\ Nick\ Craig\-Wood\ https://www.craig\-wood.com/nick/ +Copyright\ (C)\ 2019\ by\ Nick\ Craig\-Wood\ https://www.craig\-wood.com/nick/ Permission\ is\ hereby\ granted,\ free\ of\ charge,\ to\ any\ person\ obtaining\ a\ copy of\ this\ software\ and\ associated\ documentation\ files\ (the\ "Software"),\ to\ deal @@ -27241,6 +29433,46 @@ forgems Florian Apolloner .IP \[bu] 2 Aleksandar Jankovic +.IP \[bu] 2 +Maran +.IP \[bu] 2 +nguyenhuuluan434 +.IP \[bu] 2 +Laura Hausmann +.IP \[bu] 2 +yparitcher +.IP \[bu] 2 +AbelThar +.IP \[bu] 2 +Matti Niemenmaa +.IP \[bu] 2 +Russell Davis +.IP \[bu] 2 +Yi FU +.IP \[bu] 2 +Paul Millar +.IP \[bu] 2 +justinalin +.IP \[bu] 2 +EliEron +.IP \[bu] 2 +justina777 +.IP \[bu] 2 +Chaitanya Bankanhal +.IP \[bu] 2 +Michał Matczuk +.IP \[bu] 2 +Macavirus +.IP \[bu] 2 +Abhinav Sharma +.IP \[bu] 2 +ginvine <34869051+ginvine@users.noreply.github.com> +.IP \[bu] 2 +Patrick Wang +.IP \[bu] 2 +Cenk Alti +.IP \[bu] 2 +Andreas Chlupka .SH Contact the rclone project .SS Forum .PP @@ -27262,6 +29494,8 @@ You can also follow me on twitter for rclone announcements: .SS Email .PP Or if all else fails or you want to ask something private or -confidential email Nick Craig\-Wood (mailto:nick@craig-wood.com) +confidential email Nick Craig\-Wood (mailto:nick@craig-wood.com). +Please don't email me requests for help \- those are better directed to +the forum \- thanks! .SH AUTHORS Nick Craig\-Wood.