diff --git a/MANUAL.html b/MANUAL.html
index 008d79094..19d38aa09 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -12,7 +12,7 @@

Rclone

Logo

@@ -31,6 +31,7 @@
  • Google Drive
  • HTTP
  • Hubic
  • IBM COS S3
  • Memset Memstore
  • Microsoft Azure Blob Storage
  • Microsoft OneDrive
@@ -82,7 +83,7 @@

    See below for some expanded Linux / macOS instructions.

    See the Usage section of the docs for how to use rclone, or run rclone -h.

    Script installation

-To install rclone on Linux/MacOs/BSD systems, run:
+To install rclone on Linux/macOS/BSD systems, run:

    curl https://rclone.org/install.sh | sudo bash

    For beta installation, run:

    curl https://rclone.org/install.sh | sudo bash -s beta
    @@ -136,6 +137,7 @@ sudo mv rclone /usr/local/bin/
    rclone config

    See the following for detailed instructions for

    Environment Variables

    Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

Options

    Every option in rclone can have its default set by environment variable.

    To find the name of the environment variable, first, take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.

    For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.
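The renaming rule above can be checked with a quick shell sketch (using --buffer-size as the example option; the rule is the same for any long option):

```shell
# Derive the environment variable for a long option, per the rule above:
# strip the leading --, change - to _, make upper case, prepend RCLONE_.
flag="buffer-size"                                   # from --buffer-size
var="RCLONE_$(printf '%s' "$flag" | tr 'a-z-' 'A-Z_')"
echo "$var"    # RCLONE_BUFFER_SIZE
```

Setting `RCLONE_BUFFER_SIZE=16M` in the environment then has the same effect as passing `--buffer-size 16M` on every invocation.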

    @@ -1534,14 +1795,19 @@ file2.avi

    Then use as --filter-from filter-file.txt. The rules are processed in the order that they are defined.

    This example will include all jpg and png files, exclude any files matching secret*.jpg and include file2.avi. It will also include everything in the directory dir at the root of the sync, except dir/Trash which it will exclude. Everything else will be excluded from the sync.
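The rules described in this paragraph could be written out like this (a sketch of what filter-file.txt might contain, reconstructed from the description above; remember the rules are applied in order, so the secret*.jpg exclude must come before the *.jpg include, and the dir/Trash exclude before the dir/** include):

```
- secret*.jpg
+ *.jpg
+ *.png
+ file2.avi
- /dir/Trash/**
+ /dir/**
- *
```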

    --files-from - Read list of source-file names

This reads a list of file names from the file passed in and only these files are transferred. The filtering rules are ignored completely if you use this option.

    This option can be repeated to read from more than one file. These are read in the order that they are placed on the command line.

-Prepare a file like this files-from.txt
+Paths within the --files-from file will be interpreted as starting with the root specified in the command. Leading / characters are ignored.
+
+For example, suppose you had files-from.txt with this content:

 # comment
 file1.jpg
-file2.jpg
+subdir/file2.jpg

-Then use as --files-from files-from.txt. This will only transfer file1.jpg and file2.jpg providing they exist.
-
-For example, let's say you had a few files you want to back up regularly with these absolute paths:
+You could then use it like this:
+
+rclone copy --files-from files-from.txt /home/me/pics remote:pics
+
+This will transfer these files only (if they exist)
+
+/home/me/pics/file1.jpg        → remote:pics/file1.jpg
+/home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg
+
+To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths:

    /home/user1/important
     /home/user1/dir/file
     /home/user2/stuff
    @@ -1551,14 +1817,20 @@ user1/dir/file user2/stuff

    You could then copy these to a remote like this

    rclone copy --files-from files-from.txt /home remote:backup
-The 3 files will arrive in remote:backup with the paths as in the files-from.txt.
+
+The 3 files will arrive in remote:backup with the paths as in the files-from.txt like this:
+
+/home/user1/important → remote:backup/user1/important
+/home/user1/dir/file  → remote:backup/user1/dir/file
+/home/user2/stuff     → remote:backup/user2/stuff

    You could of course choose / as the root too in which case your files-from.txt might look like this.

    /home/user1/important
     /home/user1/dir/file
     /home/user2/stuff

    And you would transfer it like this

    rclone copy --files-from files-from.txt / remote:backup
-In this case there will be an extra home directory on the remote.
+
+In this case there will be an extra home directory on the remote:
+
+/home/user1/important → remote:backup/home/user1/important
+/home/user1/dir/file  → remote:backup/home/user1/dir/file
+/home/user2/stuff     → remote:backup/home/user2/stuff

    --min-size - Don't transfer any file smaller than this

This option controls the minimum size of file which will be transferred. Sizes default to kBytes, but a suffix of k, M, or G can be used.

    For example --min-size 50k means no files smaller than 50kByte will be transferred.
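The suffix arithmetic can be sketched in the shell (binary multiples assumed, i.e. k = 1024 bytes):

```shell
# --min-size 50k means files smaller than 50 * 1024 bytes are skipped
min_bytes=$((50 * 1024))
echo "$min_bytes"    # 51200
```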

    @@ -1615,6 +1887,125 @@ dir1/dir2/dir3/.ignore

    You can exclude dir3 from sync by running the following command:

    rclone sync --exclude-if-present .ignore dir1 remote:backup

    Currently only one filename is supported, i.e. --exclude-if-present should not be used multiple times.
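To see the flag in action, you could lay out the structure from the example above and mark dir3 so the sync would skip it (a local sketch; the directory names are taken from the docs above):

```shell
# Recreate the example tree and add the marker file that
# --exclude-if-present .ignore looks for
mkdir -p dir1/dir2/dir3
touch dir1/dir2/dir3/.ignore
# the sync from the docs would now skip dir3:
#   rclone sync --exclude-if-present .ignore dir1 remote:backup
ls -A dir1/dir2/dir3    # .ignore
```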

    +

    Remote controlling rclone

    +

    If rclone is run with the --rc flag then it starts an http server which can be used to remote control rclone.

    +

    NB this is experimental and everything here is subject to change!

    +

    Supported parameters

    +

    --rc

    +

Flag to start the http server to listen for remote requests

    +

    --rc-addr=IP

    +

    IPaddress:Port or :Port to bind server to. (default "localhost:5572")

    +

    --rc-cert=KEY

    +

    SSL PEM key (concatenation of certificate and CA certificate)

    +

    --rc-client-ca=PATH

    +

    Client certificate authority to verify clients with

    +

    --rc-htpasswd=PATH

    +

    htpasswd file - if not provided no authentication is done

    +

    --rc-key=PATH

    +

    SSL PEM Private key

    +

    --rc-max-header-bytes=VALUE

    +

    Maximum size of request header (default 4096)

    +

    --rc-user=VALUE

    +

    User name for authentication.

    +

    --rc-pass=VALUE

    +

    Password for authentication.

    +

    --rc-realm=VALUE

    +

    Realm for authentication (default "rclone")

    +

    --rc-server-read-timeout=DURATION

    +

    Timeout for server reading data (default 1h0m0s)

    +

    --rc-server-write-timeout=DURATION

    +

    Timeout for server writing data (default 1h0m0s)

    +

    Accessing the remote control via the rclone rc command

    +

    Rclone itself implements the remote control protocol in its rclone rc command.

    +

    You can use it like this

    +
    $ rclone rc rc/noop param1=one param2=two
    +{
    +    "param1": "one",
    +    "param2": "two"
    +}
    +

    Run rclone rc on its own to see the help for the installed remote control commands.

    +

    Supported commands

    +

    core/bwlimit: Set the bandwidth limit.

    +

    This sets the bandwidth limit to that passed in.

    +

    Eg

    +
rclone rc core/bwlimit rate=1M
+rclone rc core/bwlimit rate=off
    +

    cache/expire: Purge a remote from cache

    +

    Purge a remote from the cache backend. Supports either a directory or a file. Params:

+

    vfs/forget: Forget files or directories in the directory cache.

    +

    This forgets the paths in the directory cache causing them to be re-read from the remote when needed.

    +

    If no paths are passed in then it will forget all the paths in the directory cache.

    +
    rclone rc vfs/forget
    +

    Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any starting with dir will forget that dir, eg

    +
    rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
    +

    rc/noop: Echo the input to the output parameters

    +

    This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

    +

    rc/error: This returns an error

    +

    This returns an error with the input as part of its error string. Useful for testing error handling.

    +

    rc/list: List all the registered remote control commands

    +

    This lists all the registered remote control commands as a JSON map in the commands response.

    +

    Accessing the remote control via HTTP

    +

    Rclone implements a simple HTTP based protocol.

    +

Each endpoint takes a JSON object and returns a JSON object or an error. The JSON objects are essentially a map of string names to values.

    +

All calls must be made using POST.

    +

    The input objects can be supplied using URL parameters, POST parameters or by supplying "Content-Type: application/json" and a JSON blob in the body. There are examples of these below using curl.

    +

    The response will be a JSON blob in the body of the response. This is formatted to be reasonably human readable.

    +

    If an error occurs then there will be an HTTP error status (usually 400) and the body of the response will contain a JSON encoded error object.

    +

    Using POST with URL parameters only

    +
    curl -X POST 'http://localhost:5572/rc/noop/?potato=1&sausage=2'
    +

    Response

    +
    {
    +    "potato": "1",
    +    "sausage": "2"
    +}
    +

    Here is what an error response looks like:

    +
    curl -X POST 'http://localhost:5572/rc/error/?potato=1&sausage=2'
    +
    {
    +    "error": "arbitrary error on input map[potato:1 sausage:2]",
    +    "input": {
    +        "potato": "1",
    +        "sausage": "2"
    +    }
    +}
    +

    Note that curl doesn't return errors to the shell unless you use the -f option

    +
    $ curl -f -X POST 'http://localhost:5572/rc/error/?potato=1&sausage=2'
    +curl: (22) The requested URL returned error: 400 Bad Request
    +$ echo $?
    +22
    +

    Using POST with a form

    +
    curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop/
    +

    Response

    +
    {
    +    "potato": "1",
    +    "sausage": "2"
    +}
    +

    Note that you can combine these with URL parameters too with the POST parameters taking precedence.

    +
    curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop/?rutabaga=3&sausage=4"
    +

    Response

    +
    {
    +    "potato": "1",
    +    "rutabaga": "3",
    +    "sausage": "4"
    +}
    +
    +

    Using POST with a JSON blob

    +
    curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop/
    +

Response

+
{
+    "potato": 2,
+    "sausage": 1
+}
    +

    This can be combined with URL parameters too if required. The JSON blob takes precedence.

    +
    curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop/?rutabaga=3&potato=4'
    +
    {
    +    "potato": 2,
    +    "rutabaga": "3",
    +    "sausage": 1
    +}

    Overview of cloud storage systems

    Each cloud storage system is slightly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.

    Features

    @@ -2039,6 +2430,101 @@ dir1/dir2/dir3/.ignore

    The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list flag to work. See the rclone docs for more details.

    StreamUpload

    Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat.

    +

    Alias

    +

    The alias remote provides a new name for another remote.

    +

    Paths may be as deep as required or a local path, eg remote:directory/subdirectory or /directory/subdirectory.

    +

    During the initial setup with rclone config you will specify the target remote. The target remote can either be a local path or another remote.

    +

Subfolders can be used in the target remote. Assume an alias remote named backup with the target mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.

    +

    There will be no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop. The empty path is not allowed as a remote. To alias the current directory use . instead.

    +

Here is an example of how to make an alias called remote for a local folder. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found - make a new one
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    + 1 / Alias for a existing remote
    +   \ "alias"
    + 2 / Amazon Drive
    +   \ "amazon cloud drive"
    + 3 / Amazon S3 (also Dreamhost, Ceph, Minio)
    +   \ "s3"
    + 4 / Backblaze B2
    +   \ "b2"
    + 5 / Box
    +   \ "box"
    + 6 / Cache a remote
    +   \ "cache"
    + 7 / Dropbox
    +   \ "dropbox"
    + 8 / Encrypt/Decrypt a remote
    +   \ "crypt"
    + 9 / FTP Connection
    +   \ "ftp"
    +10 / Google Cloud Storage (this is not Google Drive)
    +   \ "google cloud storage"
    +11 / Google Drive
    +   \ "drive"
    +12 / Hubic
    +   \ "hubic"
    +13 / Local Disk
    +   \ "local"
    +14 / Microsoft Azure Blob Storage
    +   \ "azureblob"
    +15 / Microsoft OneDrive
    +   \ "onedrive"
    +16 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    +   \ "swift"
    +17 / Pcloud
    +   \ "pcloud"
    +18 / QingCloud Object Storage
    +   \ "qingstor"
    +19 / SSH/SFTP Connection
    +   \ "sftp"
    +20 / Webdav
    +   \ "webdav"
    +21 / Yandex Disk
    +   \ "yandex"
    +22 / http Connection
    +   \ "http"
    +Storage> 1
    +Remote or path to alias.
    +Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
    +remote> /mnt/storage/backup
    +Remote config
    +--------------------
    +[remote]
    +remote = /mnt/storage/backup
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +Current remotes:
    +
    +Name                 Type
    +====                 ====
    +remote               alias
    +
    +e) Edit existing remote
    +n) New remote
    +d) Delete remote
    +r) Rename remote
    +c) Copy remote
    +s) Set configuration password
    +q) Quit config
    +e/n/d/r/c/s/q> q
    +

    Once configured you can then use rclone like this,

    +

    List directories in top level in /mnt/storage/backup

    +
    rclone lsd remote:
    +

    List all the files in /mnt/storage/backup

    +
    rclone ls remote:
    +

    Copy another local directory to the alias directory called source

    +
    rclone copy /home/source remote:source

    Amazon Drive

    Paths are specified as remote:path

    Paths may be as deep as required, eg remote:directory/subdirectory.

    @@ -2161,37 +2647,23 @@ y/e/d> y
    No remotes found - make a new one
     n) New remote
     s) Set configuration password
    -n/s> n
    +q) Quit config
    +n/s/q> n
     name> remote
     Type of storage to configure.
     Choose a number from below, or type in your own value
    - 1 / Amazon Drive
    + 1 / Alias for a existing remote
    +   \ "alias"
    + 2 / Amazon Drive
        \ "amazon cloud drive"
    - 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
    + 3 / Amazon S3 (also Dreamhost, Ceph, Minio)
        \ "s3"
    - 3 / Backblaze B2
    + 4 / Backblaze B2
        \ "b2"
    - 4 / Dropbox
    -   \ "dropbox"
    - 5 / Encrypt/Decrypt a remote
    -   \ "crypt"
    - 6 / Google Cloud Storage (this is not Google Drive)
    -   \ "google cloud storage"
    - 7 / Google Drive
    -   \ "drive"
    - 8 / Hubic
    -   \ "hubic"
    - 9 / Local Disk
    -   \ "local"
    -10 / Microsoft OneDrive
    -   \ "onedrive"
    -11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    -   \ "swift"
    -12 / SSH/SFTP Connection
    -   \ "sftp"
    -13 / Yandex Disk
    -   \ "yandex"
    -Storage> 2
    +[snip]
    +23 / http Connection
    +   \ "http"
    +Storage> s3
     Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
     Choose a number from below, or type in your own value
      1 / Enter AWS credentials in the next step
    @@ -2200,80 +2672,91 @@ Choose a number from below, or type in your own value
        \ "true"
     env_auth> 1
     AWS Access Key ID - leave blank for anonymous access or runtime credentials.
    -access_key_id> access_key
    +access_key_id> XXX
     AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
    -secret_access_key> secret_key
    -Region to connect to.
    +secret_access_key> YYY
    +Region to connect to.  Leave blank if you are using an S3 clone and you don't have a region.
     Choose a number from below, or type in your own value
        / The default endpoint - a good choice if you are unsure.
      1 | US Region, Northern Virginia or Pacific Northwest.
        | Leave location constraint empty.
        \ "us-east-1"
    +   / US East (Ohio) Region
    + 2 | Needs location constraint us-east-2.
    +   \ "us-east-2"
        / US West (Oregon) Region
    - 2 | Needs location constraint us-west-2.
    + 3 | Needs location constraint us-west-2.
        \ "us-west-2"
        / US West (Northern California) Region
    - 3 | Needs location constraint us-west-1.
    + 4 | Needs location constraint us-west-1.
        \ "us-west-1"
    -   / EU (Ireland) Region Region
    - 4 | Needs location constraint EU or eu-west-1.
    +   / Canada (Central) Region
    + 5 | Needs location constraint ca-central-1.
    +   \ "ca-central-1"
    +   / EU (Ireland) Region
    + 6 | Needs location constraint EU or eu-west-1.
        \ "eu-west-1"
    +   / EU (London) Region
    + 7 | Needs location constraint eu-west-2.
    +   \ "eu-west-2"
        / EU (Frankfurt) Region
    - 5 | Needs location constraint eu-central-1.
    + 8 | Needs location constraint eu-central-1.
        \ "eu-central-1"
        / Asia Pacific (Singapore) Region
    - 6 | Needs location constraint ap-southeast-1.
    + 9 | Needs location constraint ap-southeast-1.
        \ "ap-southeast-1"
        / Asia Pacific (Sydney) Region
    - 7 | Needs location constraint ap-southeast-2.
    +10 | Needs location constraint ap-southeast-2.
        \ "ap-southeast-2"
        / Asia Pacific (Tokyo) Region
    - 8 | Needs location constraint ap-northeast-1.
    +11 | Needs location constraint ap-northeast-1.
        \ "ap-northeast-1"
        / Asia Pacific (Seoul)
    - 9 | Needs location constraint ap-northeast-2.
    +12 | Needs location constraint ap-northeast-2.
        \ "ap-northeast-2"
        / Asia Pacific (Mumbai)
    -10 | Needs location constraint ap-south-1.
    +13 | Needs location constraint ap-south-1.
        \ "ap-south-1"
        / South America (Sao Paulo) Region
    -11 | Needs location constraint sa-east-1.
    +14 | Needs location constraint sa-east-1.
        \ "sa-east-1"
    -   / If using an S3 clone that only understands v2 signatures
    -12 | eg Ceph/Dreamhost
    -   | set this and make sure you set the endpoint.
    +   / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
    +15 | Set this and make sure you set the endpoint.
        \ "other-v2-signature"
    -   / If using an S3 clone that understands v4 signatures set this
    -13 | and make sure you set the endpoint.
    -   \ "other-v4-signature"
     region> 1
     Endpoint for S3 API.
     Leave blank if using AWS to use the default endpoint for the region.
     Specify if using an S3 clone such as Ceph.
    -endpoint>
    +endpoint> 
     Location constraint - must be set to match the Region. Used when creating buckets only.
     Choose a number from below, or type in your own value
      1 / Empty for US Region, Northern Virginia or Pacific Northwest.
        \ ""
    - 2 / US West (Oregon) Region.
    + 2 / US East (Ohio) Region.
    +   \ "us-east-2"
    + 3 / US West (Oregon) Region.
        \ "us-west-2"
    - 3 / US West (Northern California) Region.
    + 4 / US West (Northern California) Region.
        \ "us-west-1"
    - 4 / EU (Ireland) Region.
    + 5 / Canada (Central) Region.
    +   \ "ca-central-1"
    + 6 / EU (Ireland) Region.
        \ "eu-west-1"
    - 5 / EU Region.
    + 7 / EU (London) Region.
    +   \ "eu-west-2"
    + 8 / EU Region.
        \ "EU"
    - 6 / Asia Pacific (Singapore) Region.
    + 9 / Asia Pacific (Singapore) Region.
        \ "ap-southeast-1"
    - 7 / Asia Pacific (Sydney) Region.
    +10 / Asia Pacific (Sydney) Region.
        \ "ap-southeast-2"
    - 8 / Asia Pacific (Tokyo) Region.
    +11 / Asia Pacific (Tokyo) Region.
        \ "ap-northeast-1"
    - 9 / Asia Pacific (Seoul)
    +12 / Asia Pacific (Seoul)
        \ "ap-northeast-2"
    -10 / Asia Pacific (Mumbai)
    +13 / Asia Pacific (Mumbai)
        \ "ap-south-1"
    -11 / South America (Sao Paulo) Region.
    +14 / South America (Sao Paulo) Region.
        \ "sa-east-1"
     location_constraint> 1
     Canned ACL used when creating buckets and/or storing objects in S3.
    @@ -2294,14 +2777,14 @@ Choose a number from below, or type in your own value
        / Both the object owner and the bucket owner get FULL_CONTROL over the object.
      6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
        \ "bucket-owner-full-control"
    -acl> private
    +acl> 1
     The server-side encryption algorithm used when storing this object in S3.
     Choose a number from below, or type in your own value
      1 / None
        \ ""
      2 / AES256
        \ "AES256"
    -server_side_encryption>
    +server_side_encryption> 1
     The storage class to use when storing objects in S3.
     Choose a number from below, or type in your own value
      1 / Default
    @@ -2312,19 +2795,19 @@ Choose a number from below, or type in your own value
        \ "REDUCED_REDUNDANCY"
      4 / Standard Infrequent Access storage class
        \ "STANDARD_IA"
    -storage_class>
    +storage_class> 1
     Remote config
     --------------------
     [remote]
     env_auth = false
    -access_key_id = access_key
    -secret_access_key = secret_key
    +access_key_id = XXX
    +secret_access_key = YYY
     region = us-east-1
    -endpoint =
    -location_constraint =
    +endpoint = 
    +location_constraint = 
     acl = private
    -server_side_encryption =
    -storage_class =
    +server_side_encryption = 
    +storage_class = 
     --------------------
     y) Yes this is OK
     e) Edit this remote
    @@ -2344,10 +2827,10 @@ y/e/d> y

    Modified time

The modified time is stored as metadata on the object as X-Amz-Meta-Mtime as floating point seconds since the epoch, accurate to 1 ns.
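A value in that format can be produced in the shell like this (a sketch: GNU date assumed, and demo.txt is a hypothetical file standing in for the uploaded object):

```shell
# Print a file's mtime as floating-point seconds since the epoch with
# nanosecond precision - the shape of the X-Amz-Meta-Mtime value
touch demo.txt
date -r demo.txt +%s.%N
```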

    Multipart uploads

-rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.
+rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB. Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.

    Buckets and Regions

    With Amazon S3 you can list buckets (rclone lsd) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region.

Authentication

    There are two ways to supply rclone with a set of AWS credentials. In order of precedence: