docs: auto generate backend options documentation

This inserts the output of "rclone help backend xxx" into the help
pages for each backend.
Nick Craig-Wood 2018-10-01 20:48:54 +01:00
parent a9273c5da5
commit 78b9bd77f5
29 changed files with 2829 additions and 57 deletions

CONTRIBUTING.md

@@ -182,10 +182,14 @@ with modules beneath.
If you are adding a new feature then please update the documentation.

If you add a new general flag (not for a backend), then document it in
`docs/content/docs.md` - the flags there are supposed to be in
alphabetical order.

If you add a new backend option/flag, then it should be documented in
the source file in the `Help:` field. The first line of this is used
for the flag help, the remainder is shown to the user in `rclone
config` and is added to the docs with `make backenddocs`.
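
For illustration, here is a rough sketch of what such a declaration looks like in a backend's `fs.RegInfo` registration. The backend and option below are invented for this example, and the exact `fs.Option` field set should be checked against the `fs` package, but the shape is approximately this:

    // Sketch of a hypothetical backend registration (not a real rclone backend).
    package mybackend

    import "github.com/ncw/rclone/fs"

    func init() {
        fs.Register(&fs.RegInfo{
            Name:        "mybackend",
            Description: "My Example Storage",
            // NewFs and the rest of the registration are omitted here.
            Options: []fs.Option{{
                Name: "chunk_size",
                // The first line of Help becomes the --mybackend-chunk-size
                // flag help; the rest is shown in "rclone config" and is
                // inserted into docs/content/mybackend.md by "make backenddocs".
                Help: "Upload chunk size.\n\nLarger chunks use more memory but may transfer faster.",
                // Options marked Advanced end up under "Advanced Options" in the docs.
                Advanced: true,
            }},
        })
    }

After changing a `Help:` string, run `make backenddocs` to regenerate the section between the autogenerated markers in the corresponding `docs/content/*.md` file.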

The only documentation you need to edit are the `docs/content/*.md`
files. The MANUAL.*, rclone.1, web site etc are all auto generated
@@ -355,7 +359,7 @@ See the [testing](#testing) section for more information on integration tests.
Add your fs to the docs - you'll need to pick an icon for it from [fontawesome](http://fontawesome.io/icons/). Keep lists of remotes in alphabetical order but with the local file system last.

* `README.md` - main Github page
* `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
* `docs/content/overview.md` - overview docs
* `docs/content/docs.md` - list of remotes in config section
* `docs/content/about.md` - front page of rclone.org

Makefile

@@ -107,7 +107,7 @@ doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs

rclone.1: MANUAL.md
	pandoc -s --from markdown --to man MANUAL.md -o rclone.1

MANUAL.md: bin/make_manual.py docs/content/*.md commanddocs backenddocs
	./bin/make_manual.py

MANUAL.html: MANUAL.md

@@ -119,6 +119,9 @@ MANUAL.txt: MANUAL.md

commanddocs: rclone
	rclone gendocs docs/content/commands/

backenddocs: rclone bin/make_backend_docs.py
	./bin/make_backend_docs.py

rcdocs: rclone
	bin/make_rc_docs.sh

bin/make_backend_docs.py (new executable file, 67 lines)

@@ -0,0 +1,67 @@
#!/usr/bin/env python
"""
Make backend documentation
"""

import os
import subprocess

marker = "<!--- autogenerated options"
start = marker + " start"
stop = marker + " stop"

# directory name to backend name
dir_to_backend = {
    "googlecloudstorage": "google cloud storage",
    "amazonclouddrive": "amazon cloud drive",
}

def find_backends():
    """Return a list of all backends"""
    return [ x for x in os.listdir("backend") if x not in ("all",) ]

def output_docs(backend, out):
    """Output documentation for backend options to out"""
    backend = dir_to_backend.get(backend, backend)
    out.flush()
    subprocess.check_call(["rclone", "help", "backend", backend], stdout=out)

def alter_doc(backend):
    """Alter the documentation for backend"""
    doc_file = "docs/content/"+backend+".md"
    if not os.path.exists(doc_file):
        raise ValueError("Didn't find doc file %s" % (doc_file,))
    new_file = doc_file+"~new~"
    altered = False
    with open(doc_file, "r") as in_file, open(new_file, "w") as out_file:
        in_docs = False
        for line in in_file:
            if not in_docs:
                if start in line:
                    in_docs = True
                    start_full = start + " - DO NOT EDIT, instead edit fs.RegInfo in backend/%s/%s.go then run make backenddocs -->\n" % (backend, backend)
                    out_file.write(start_full)
                    output_docs(backend, out_file)
                    out_file.write(stop+" -->\n")
                    altered = True
            if not in_docs:
                out_file.write(line)
            if in_docs:
                if stop in line:
                    in_docs = False
    os.rename(doc_file, doc_file+"~")
    os.rename(new_file, doc_file)
    if not altered:
        raise ValueError("Didn't find '%s' markers in %s" % (start, doc_file))

if __name__ == "__main__":
    failed, success = 0, 0
    for backend in find_backends():
        try:
            alter_doc(backend)
        except Exception, e:
            print "Failed adding docs for %s backend: %s" % (backend, e)
            failed += 1
        else:
            success += 1
    print "Added docs for %d backends with %d failures" % (success, failed)

docs/content/alias.md

@@ -128,5 +128,19 @@ Copy another local directory to the alias directory called source

    rclone copy /home/source remote:source

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/alias/alias.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to alias (Alias for an existing remote).
#### --alias-remote
Remote or path to alias.
Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
- Config: remote
- Env Var: RCLONE_ALIAS_REMOTE
- Type: string
- Default: ""
<!--- autogenerated options stop -->
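
As the marker says, this block is generated from the option declared in `backend/alias/alias.go`. Paraphrased (not the verbatim source), that declaration looks roughly like the sketch below; the option `Name` is what produces the `Config: remote` line, the `--alias-remote` flag and the `RCLONE_ALIAS_REMOTE` environment variable, while the `Help` text is what is reproduced above:

    // Paraphrased sketch of the registration in backend/alias/alias.go.
    package alias

    import "github.com/ncw/rclone/fs"

    func init() {
        fs.Register(&fs.RegInfo{
            Name:        "alias",
            Description: "Alias for an existing remote",
            // NewFs etc. omitted; sketch only.
            Options: []fs.Option{{
                // "remote" becomes "- Config: remote", --alias-remote and RCLONE_ALIAS_REMOTE.
                Name: "remote",
                Help: "Remote or path to alias.\nCan be \"myremote:path/to/dir\", \"myremote:bucket\", \"myremote:\" or \"/local/path\".",
            }},
        })
    }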

docs/content/amazonclouddrive.md

@@ -173,8 +173,110 @@ Let's say you usually use `amazon.co.uk`. When you authenticate with
rclone it will take you to an `amazon.com` page to log in. Your
`amazon.co.uk` email and password should work here just fine.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/amazonclouddrive/amazonclouddrive.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to amazon cloud drive (Amazon Drive).
#### --acd-client-id
Amazon Application Client ID.
- Config: client_id
- Env Var: RCLONE_ACD_CLIENT_ID
- Type: string
- Default: ""
#### --acd-client-secret
Amazon Application Client Secret.
- Config: client_secret
- Env Var: RCLONE_ACD_CLIENT_SECRET
- Type: string
- Default: ""
### Advanced Options
Here are the advanced options specific to amazon cloud drive (Amazon Drive).
#### --acd-auth-url
Auth server URL.
Leave blank to use Amazon's.
- Config: auth_url
- Env Var: RCLONE_ACD_AUTH_URL
- Type: string
- Default: ""
#### --acd-token-url
Token server url.
Leave blank to use Amazon's.
- Config: token_url
- Env Var: RCLONE_ACD_TOKEN_URL
- Type: string
- Default: ""
#### --acd-checkpoint
Checkpoint for internal polling (debug).
- Config: checkpoint
- Env Var: RCLONE_ACD_CHECKPOINT
- Type: string
- Default: ""
#### --acd-upload-wait-per-gb
Additional time per GB to wait after a failed complete upload to see if it appears.
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
happens sometimes for files over 1GB in size and nearly every time for
files bigger than 10GB. This parameter controls the time rclone waits
for the file to appear.
The default value for this parameter is 3 minutes per GB, so by
default it will wait 3 minutes for every GB uploaded to see if the
file appears.
You can disable this feature by setting it to 0. This may cause
conflict errors as rclone retries the failed upload but the file will
most likely appear correctly eventually.
These values were determined empirically by observing lots of uploads
of big files for a range of file sizes.
Upload with the "-v" flag to see more info about what rclone is doing
in this situation.
- Config: upload_wait_per_gb
- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
- Type: Duration
- Default: 3m0s
#### --acd-templink-threshold
Files >= this size will be downloaded via their tempLink.
Files this size or more will be downloaded via their "tempLink". This
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10GB. The default for this is 9GB which
shouldn't need to be changed.
To download files above this threshold, rclone requests a "tempLink"
which downloads the file through a temporary URL directly from the
underlying S3 storage.
- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
- Default: 9G
<!--- autogenerated options stop -->
### Limitations ###

docs/content/azureblob.md

@@ -168,8 +168,112 @@ upload which means that there is a limit of 9.5TB of multipart uploads
in progress as Azure won't allow more than that amount of uncommitted
blocks.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/azureblob/azureblob.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).
#### --azureblob-account
Storage Account Name (leave blank to use connection string or SAS URL)
- Config: account
- Env Var: RCLONE_AZUREBLOB_ACCOUNT
- Type: string
- Default: ""
#### --azureblob-key
Storage Account Key (leave blank to use connection string or SAS URL)
- Config: key
- Env Var: RCLONE_AZUREBLOB_KEY
- Type: string
- Default: ""
#### --azureblob-sas-url
SAS URL for container level access only
(leave blank if using account/key or connection string)
- Config: sas_url
- Env Var: RCLONE_AZUREBLOB_SAS_URL
- Type: string
- Default: ""
### Advanced Options
Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).
#### --azureblob-endpoint
Endpoint for the service
Leave blank normally.
- Config: endpoint
- Env Var: RCLONE_AZUREBLOB_ENDPOINT
- Type: string
- Default: ""
#### --azureblob-upload-cutoff
Cutoff for switching to chunked upload (<= 256MB).
- Config: upload_cutoff
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 256M
#### --azureblob-chunk-size
Upload chunk size (<= 100MB).
Note that this is stored in memory and there may be up to
"--transfers" chunks stored at once in memory.
- Config: chunk_size
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- Type: SizeSuffix
- Default: 4M
#### --azureblob-list-chunk
Size of blob list.
This sets the number of blobs requested in each listing chunk. Default
is the maximum, 5000. "List blobs" requests are permitted 2 minutes
per megabyte to complete. If an operation is taking longer than 2
minutes per megabyte on average, it will time out (
[source](https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval)
). This can be used to limit the number of blobs items to return, to
avoid the time out.
- Config: list_chunk
- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
- Type: int
- Default: 5000
#### --azureblob-access-tier
Access tier of blob: hot, cool or archive.
Archived blobs can be restored by setting access tier to hot or
cool. Leave blank if you intend to use default access tier, which is
set at account level
If there is no "access tier" specified, rclone doesn't apply any tier.
rclone performs "Set Tier" operation on blobs while uploading; if objects
are not modified, specifying a new "access tier" will have no effect.
If blobs are in "archive tier" at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
tiering blob to "Hot" or "Cool".
- Config: access_tier
- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
- Type: string
- Default: ""
<!--- autogenerated options stop -->
### Limitations ###

docs/content/b2.md

@@ -294,6 +294,108 @@ server to the nearest millisecond appended to them.
Note that when using `--b2-versions` no file write operations are
permitted, so you can't upload files or delete them.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/b2/b2.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to b2 (Backblaze B2).
#### --b2-account
Account ID or Application Key ID
- Config: account
- Env Var: RCLONE_B2_ACCOUNT
- Type: string
- Default: ""
#### --b2-key
Application Key
- Config: key
- Env Var: RCLONE_B2_KEY
- Type: string
- Default: ""
#### --b2-hard-delete
Permanently delete files on remote removal, otherwise hide files.
- Config: hard_delete
- Env Var: RCLONE_B2_HARD_DELETE
- Type: bool
- Default: false
### Advanced Options
Here are the advanced options specific to b2 (Backblaze B2).
#### --b2-endpoint
Endpoint for the service.
Leave blank normally.
- Config: endpoint
- Env Var: RCLONE_B2_ENDPOINT
- Type: string
- Default: ""
#### --b2-test-mode
A flag string for X-Bz-Test-Mode header for debugging.
This is for debugging purposes only. Setting it to one of the strings
below will cause b2 to return specific errors:
* "fail_some_uploads"
* "expire_some_account_authorization_tokens"
* "force_cap_exceeded"
These will be set in the "X-Bz-Test-Mode" header which is documented
in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).
- Config: test_mode
- Env Var: RCLONE_B2_TEST_MODE
- Type: string
- Default: ""
#### --b2-versions
Include old versions in directory listings.
Note that when using this no file write operations are permitted,
so you can't upload files or delete them.
- Config: versions
- Env Var: RCLONE_B2_VERSIONS
- Type: bool
- Default: false
#### --b2-upload-cutoff
Cutoff for switching to chunked upload.
Files above this size will be uploaded in chunks of "--b2-chunk-size".
This value should be set no larger than 4.657GiB (== 5GB).
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 190.735M
#### --b2-chunk-size
Upload chunk size. Must fit in memory.
When uploading large files, chunk the file into this size. Note that
these chunks are buffered in memory and there might be a maximum of
"--transfers" chunks in progress at once. 5,000,000 Bytes is the
minimum size.
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
- Default: 96M
<!--- autogenerated options stop -->

docs/content/box.md

@@ -217,8 +217,54 @@ normally 8MB so increasing `--transfers` will increase memory use.
Depending on the enterprise settings for your user, the item will
either be actually deleted from Box or moved to the trash.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/box/box.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to box (Box).
#### --box-client-id
Box App Client Id.
Leave blank normally.
- Config: client_id
- Env Var: RCLONE_BOX_CLIENT_ID
- Type: string
- Default: ""
#### --box-client-secret
Box App Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_BOX_CLIENT_SECRET
- Type: string
- Default: ""
### Advanced Options
Here are the advanced options specific to box (Box).
#### --box-upload-cutoff
Cutoff for switching to multipart upload (>= 50MB).
- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 50M
#### --box-commit-retries
Max number of times to try committing a multipart file.
- Config: commit_retries
- Env Var: RCLONE_BOX_COMMIT_RETRIES
- Type: int
- Default: 100
<!--- autogenerated options stop -->
### Limitations ###

docs/content/cache.md

@@ -290,5 +290,315 @@ Params:
- **remote** = path to remote **(required)**
- **withData** = true/false to delete cached data (chunks) as well _(optional, false by default)_

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/cache/cache.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to cache (Cache a remote).
#### --cache-remote
Remote to cache.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
- Config: remote
- Env Var: RCLONE_CACHE_REMOTE
- Type: string
- Default: ""
#### --cache-plex-url
The URL of the Plex server
- Config: plex_url
- Env Var: RCLONE_CACHE_PLEX_URL
- Type: string
- Default: ""
#### --cache-plex-username
The username of the Plex user
- Config: plex_username
- Env Var: RCLONE_CACHE_PLEX_USERNAME
- Type: string
- Default: ""
#### --cache-plex-password
The password of the Plex user
- Config: plex_password
- Env Var: RCLONE_CACHE_PLEX_PASSWORD
- Type: string
- Default: ""
#### --cache-chunk-size
The size of a chunk (partial file data).
Use lower numbers for slower connections. If the chunk size is
changed, any downloaded chunks will be invalid and cache-chunk-path
will need to be cleared or unexpected EOF errors will occur.
- Config: chunk_size
- Env Var: RCLONE_CACHE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5M
- Examples:
    - "1m"
        - 1MB
    - "5M"
        - 5 MB
    - "10M"
        - 10 MB
#### --cache-info-age
How long to cache file structure information (directory listings, file size, times etc).
If all write operations are done through the cache then you can safely make
this value very large as the cache store will also be updated in real time.
- Config: info_age
- Env Var: RCLONE_CACHE_INFO_AGE
- Type: Duration
- Default: 6h0m0s
- Examples:
    - "1h"
        - 1 hour
    - "24h"
        - 24 hours
    - "48h"
        - 48 hours
#### --cache-chunk-total-size
The total size that the chunks can take up on the local disk.
If the cache exceeds this value then it will start to delete the
oldest chunks until it goes under this value.
- Config: chunk_total_size
- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
- Type: SizeSuffix
- Default: 10G
- Examples:
    - "500M"
        - 500 MB
    - "1G"
        - 1 GB
    - "10G"
        - 10 GB
### Advanced Options
Here are the advanced options specific to cache (Cache a remote).
#### --cache-plex-token
The plex token for authentication - auto set normally
- Config: plex_token
- Env Var: RCLONE_CACHE_PLEX_TOKEN
- Type: string
- Default: ""
#### --cache-plex-insecure
Skip all certificate verifications when connecting to the Plex server
- Config: plex_insecure
- Env Var: RCLONE_CACHE_PLEX_INSECURE
- Type: string
- Default: ""
#### --cache-db-path
Directory to store file structure metadata DB.
The remote name is used as the DB file name.
- Config: db_path
- Env Var: RCLONE_CACHE_DB_PATH
- Type: string
- Default: "/home/ncw/.cache/rclone/cache-backend"
#### --cache-chunk-path
Directory to cache chunk files.
Path to where partial file data (chunks) are stored locally. The remote
name is appended to the final path.
This config follows the "--cache-db-path". If you specify a custom
location for "--cache-db-path" and don't specify one for "--cache-chunk-path"
then "--cache-chunk-path" will use the same path as "--cache-db-path".
- Config: chunk_path
- Env Var: RCLONE_CACHE_CHUNK_PATH
- Type: string
- Default: "/home/ncw/.cache/rclone/cache-backend"
#### --cache-db-purge
Clear all the cached data for this remote on start.
- Config: db_purge
- Env Var: RCLONE_CACHE_DB_PURGE
- Type: bool
- Default: false
#### --cache-chunk-clean-interval
How often should the cache perform cleanups of the chunk storage.
The default value should be ok for most people. If you find that the
cache goes over "cache-chunk-total-size" too often then try to lower
this value to force it to perform cleanups more often.
- Config: chunk_clean_interval
- Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
- Type: Duration
- Default: 1m0s
#### --cache-read-retries
How many times to retry a read from a cache storage.
Since reading from a cache stream is independent from downloading file
data, readers can get to a point where there's no more data in the
cache. Most of the time this can indicate a connectivity issue if
cache isn't able to provide file data anymore.
For really slow connections, increase this to a point where the stream is
able to provide data but your experience will be very stuttering.
- Config: read_retries
- Env Var: RCLONE_CACHE_READ_RETRIES
- Type: int
- Default: 10
#### --cache-workers
How many workers should run in parallel to download chunks.
Higher values will mean more parallel processing (better CPU needed)
and more concurrent requests on the cloud provider. This impacts
several aspects like the cloud provider API limits, more stress on the
hardware that rclone runs on but it also means that streams will be
more fluid and data will be available much faster to readers.
**Note**: If the optional Plex integration is enabled then this
setting will adapt to the type of reading performed and the value
specified here will be used as a maximum number of workers to use.
- Config: workers
- Env Var: RCLONE_CACHE_WORKERS
- Type: int
- Default: 4
#### --cache-chunk-no-memory
Disable the in-memory cache for storing chunks during streaming.
By default, cache will keep file data during streaming in RAM as well
to provide it to readers as fast as possible.
This transient data is evicted as soon as it is read and the number of
chunks stored doesn't exceed the number of workers. However, depending
on other settings like "cache-chunk-size" and "cache-workers" this footprint
can increase if there are parallel streams too (multiple files being read
at the same time).
If the hardware permits it, use this feature to provide an overall better
performance during streaming but it can also be disabled if RAM is not
available on the local machine.
- Config: chunk_no_memory
- Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
- Type: bool
- Default: false
#### --cache-rps
Limits the number of requests per second to the source FS (-1 to disable)
This setting places a hard limit on the number of requests per second
that cache will be doing to the cloud provider remote and try to
respect that value by setting waits between reads.
If you find that you're getting banned or limited on the cloud
provider through cache and know that a smaller number of requests per
second will allow you to work with it then you can use this setting
for that.
A good balance of all the other settings should make this setting
useless but it is available to set for more special cases.
**NOTE**: This will limit the number of requests during streams but
other API calls to the cloud provider like directory listings will
still pass.
- Config: rps
- Env Var: RCLONE_CACHE_RPS
- Type: int
- Default: -1
#### --cache-writes
Cache file data on writes through the FS
If you need to read files immediately after you upload them through
cache you can enable this flag to have their data stored in the
cache store at the same time during upload.
- Config: writes
- Env Var: RCLONE_CACHE_WRITES
- Type: bool
- Default: false
#### --cache-tmp-upload-path
Directory to keep temporary files until they are uploaded.
This is the path where cache will use as a temporary storage for new
files that need to be uploaded to the cloud provider.
Specifying a value will enable this feature. Without it, it is
completely disabled and files will be uploaded directly to the cloud
provider
- Config: tmp_upload_path
- Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
- Type: string
- Default: ""
#### --cache-tmp-wait-time
How long should files be stored in local cache before being uploaded
This is the duration that a file must wait in the temporary location
_cache-tmp-upload-path_ before it is selected for upload.
Note that only one file is uploaded at a time and it can take longer
to start the upload if a queue formed for this purpose.
- Config: tmp_wait_time
- Env Var: RCLONE_CACHE_TMP_WAIT_TIME
- Type: Duration
- Default: 15s
#### --cache-db-wait-time
How long to wait for the DB to be available - 0 is unlimited
Only one process can have the DB open at any one time, so rclone waits
for this duration for the DB to become available before it gives an
error.
If you set it to 0 then it will wait forever.
- Config: db_wait_time
- Env Var: RCLONE_CACHE_DB_WAIT_TIME
- Type: Duration
- Default: 1s
<!--- autogenerated options stop -->

docs/content/crypt.md

@@ -294,8 +294,93 @@ Note that you should use the `rclone cryptcheck` command to check the
integrity of a crypted remote instead of `rclone check` which can't
check the checksums properly.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/crypt/crypt.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to crypt (Encrypt/Decrypt a remote).
#### --crypt-remote
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
- Config: remote
- Env Var: RCLONE_CRYPT_REMOTE
- Type: string
- Default: ""
#### --crypt-filename-encryption
How to encrypt the filenames.
- Config: filename_encryption
- Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
- Type: string
- Default: "standard"
- Examples:
    - "off"
        - Don't encrypt the file names. Adds a ".bin" extension only.
    - "standard"
        - Encrypt the filenames see the docs for the details.
    - "obfuscate"
        - Very simple filename obfuscation.
#### --crypt-directory-name-encryption
Option to either encrypt directory names or leave them intact.
- Config: directory_name_encryption
- Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
- Type: bool
- Default: true
- Examples:
    - "true"
        - Encrypt directory names.
    - "false"
        - Don't encrypt directory names, leave them intact.
#### --crypt-password
Password or pass phrase for encryption.
- Config: password
- Env Var: RCLONE_CRYPT_PASSWORD
- Type: string
- Default: ""
#### --crypt-password2
Password or pass phrase for salt. Optional but recommended.
Should be different to the previous password.
- Config: password2
- Env Var: RCLONE_CRYPT_PASSWORD2
- Type: string
- Default: ""
### Advanced Options
Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).
#### --crypt-show-mapping
For all files listed show how the names encrypt.
If this flag is set then for each file that the remote is asked to
list, it will log (at level INFO) a line stating the decrypted file
name and the encrypted file name.
This is so you can work out which encrypted names are which decrypted
names just in case you need to do something with the encrypted file
names, or for debugging purposes.
- Config: show_mapping
- Env Var: RCLONE_CRYPT_SHOW_MAPPING
- Type: bool
- Default: false
<!--- autogenerated options stop -->
## Backing up a crypted remote ##

docs/content/drive.md

@@ -483,8 +483,311 @@ Google Documents.
| url | INI style link file | macOS, Windows |
| webloc | macOS specific XML format | macOS |

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/drive/drive.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to drive (Google Drive).
#### --drive-client-id
Google Application Client Id
Leave blank normally.
- Config: client_id
- Env Var: RCLONE_DRIVE_CLIENT_ID
- Type: string
- Default: ""
#### --drive-client-secret
Google Application Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_DRIVE_CLIENT_SECRET
- Type: string
- Default: ""
#### --drive-scope
Scope that rclone should use when requesting access from drive.
- Config: scope
- Env Var: RCLONE_DRIVE_SCOPE
- Type: string
- Default: ""
- Examples:
    - "drive"
        - Full access all files, excluding Application Data Folder.
    - "drive.readonly"
        - Read-only access to file metadata and file contents.
    - "drive.file"
        - Access to files created by rclone only.
        - These are visible in the drive website.
        - File authorization is revoked when the user deauthorizes the app.
    - "drive.appfolder"
        - Allows read and write access to the Application Data folder.
        - This is not visible in the drive website.
    - "drive.metadata.readonly"
        - Allows read-only access to file metadata but
        - does not allow any access to read or download file content.
#### --drive-root-folder-id
ID of the root folder
Leave blank normally.
Fill in to access "Computers" folders. (see docs).
- Config: root_folder_id
- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
- Type: string
- Default: ""
#### --drive-service-account-file
Service Account Credentials JSON file path
Leave blank normally.
Needed only if you want to use SA instead of interactive login.
- Config: service_account_file
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
- Type: string
- Default: ""
### Advanced Options
Here are the advanced options specific to drive (Google Drive).
#### --drive-service-account-credentials
Service Account Credentials JSON blob
Leave blank normally.
Needed only if you want to use SA instead of interactive login.
- Config: service_account_credentials
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
- Type: string
- Default: ""
#### --drive-team-drive
ID of the Team Drive
- Config: team_drive
- Env Var: RCLONE_DRIVE_TEAM_DRIVE
- Type: string
- Default: ""
#### --drive-auth-owner-only
Only consider files owned by the authenticated user.
- Config: auth_owner_only
- Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
- Type: bool
- Default: false
#### --drive-use-trash
Send files to the trash instead of deleting permanently.
Defaults to true, namely sending files to the trash.
Use `--drive-use-trash=false` to delete files permanently instead.
- Config: use_trash
- Env Var: RCLONE_DRIVE_USE_TRASH
- Type: bool
- Default: true
#### --drive-skip-gdocs
Skip google documents in all listings.
If given, gdocs practically become invisible to rclone.
- Config: skip_gdocs
- Env Var: RCLONE_DRIVE_SKIP_GDOCS
- Type: bool
- Default: false
#### --drive-shared-with-me
Only show files that are shared with me.
Instructs rclone to operate on your "Shared with me" folder (where
Google Drive lets you access the files and folders others have shared
with you).
This works both with the "list" (lsd, lsl, etc) and the "copy"
commands (copy, sync, etc), and with all other commands too.
- Config: shared_with_me
- Env Var: RCLONE_DRIVE_SHARED_WITH_ME
- Type: bool
- Default: false
#### --drive-trashed-only
Only show files that are in the trash.
This will show trashed files in their original directory structure.
- Config: trashed_only
- Env Var: RCLONE_DRIVE_TRASHED_ONLY
- Type: bool
- Default: false
#### --drive-formats
Deprecated: see export_formats
- Config: formats
- Env Var: RCLONE_DRIVE_FORMATS
- Type: string
- Default: ""
#### --drive-export-formats
Comma separated list of preferred formats for downloading Google docs.
- Config: export_formats
- Env Var: RCLONE_DRIVE_EXPORT_FORMATS
- Type: string
- Default: "docx,xlsx,pptx,svg"
#### --drive-import-formats
Comma separated list of preferred formats for uploading Google docs.
- Config: import_formats
- Env Var: RCLONE_DRIVE_IMPORT_FORMATS
- Type: string
- Default: ""
#### --drive-allow-import-name-change
Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- Config: allow_import_name_change
- Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
- Type: bool
- Default: false
#### --drive-use-created-date
Use file created date instead of modified date.
Useful when downloading data and you want the creation date used in
place of the last modified date.
**WARNING**: This flag may have some unexpected consequences.
When uploading to your drive all files will be overwritten unless they
haven't been modified since their creation. And the inverse will occur
while downloading. This side effect can be avoided by using the
"--checksum" flag.
This feature was implemented to retain photos capture date as recorded
by google photos. You will first need to check the "Create a Google
Photos folder" option in your google drive settings. You can then copy
or move the photos locally and use the date the image was taken
(created) set as the modification date.
- Config: use_created_date
- Env Var: RCLONE_DRIVE_USE_CREATED_DATE
- Type: bool
- Default: false
#### --drive-list-chunk
Size of listing chunk 100-1000. 0 to disable.
- Config: list_chunk
- Env Var: RCLONE_DRIVE_LIST_CHUNK
- Type: int
- Default: 1000
#### --drive-impersonate
Impersonate this user when using a service account.
- Config: impersonate
- Env Var: RCLONE_DRIVE_IMPERSONATE
- Type: string
- Default: ""
#### --drive-alternate-export
Use alternate export URLs for google documents export.
If this option is set this instructs rclone to use an alternate set of
export URLs for drive documents. Users have reported that the
official export URLs can't export large documents, whereas these
unofficial ones can.
See rclone issue [#2243](https://github.com/ncw/rclone/issues/2243) for background,
[this google drive issue](https://issuetracker.google.com/issues/36761333) and
[this helpful post](https://www.labnol.org/internet/direct-links-for-google-drive/28356/).
- Config: alternate_export
- Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
- Type: bool
- Default: false
#### --drive-upload-cutoff
Cutoff for switching to chunked upload
- Config: upload_cutoff
- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 8M
#### --drive-chunk-size
Upload chunk size. Must be a power of 2 >= 256k.
Making this larger will improve performance, but note that each chunk
is buffered in memory one per transfer.
Reducing this will reduce memory usage but decrease performance.
- Config: chunk_size
- Env Var: RCLONE_DRIVE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 8M
#### --drive-acknowledge-abuse
Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
If downloading a file returns the error "This file has been identified
as malware or spam and cannot be downloaded" with the error code
"cannotDownloadAbusiveFile" then supply this flag to rclone to
indicate you acknowledge the risks of downloading the file and rclone
will download it anyway.
- Config: acknowledge_abuse
- Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
- Type: bool
- Default: false
#### --drive-keep-revision-forever
Keep new head revision of each file forever.
- Config: keep_revision_forever
- Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
- Type: bool
- Default: false
#### --drive-v2-download-min-size
If Objects are greater, use drive v2 API to download.
- Config: v2_download_min_size
- Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
- Type: SizeSuffix
- Default: off
<!--- autogenerated options stop -->
### Limitations ###

docs/content/dropbox.md

@@ -123,8 +123,52 @@ Dropbox supports [its own hash
type](https://www.dropbox.com/developers/reference/content-hash) which
is checked for all transfers.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/dropbox/dropbox.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to dropbox (Dropbox).
#### --dropbox-client-id
Dropbox App Client Id
Leave blank normally.
- Config: client_id
- Env Var: RCLONE_DROPBOX_CLIENT_ID
- Type: string
- Default: ""
#### --dropbox-client-secret
Dropbox App Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_DROPBOX_CLIENT_SECRET
- Type: string
- Default: ""
### Advanced Options
Here are the advanced options specific to dropbox (Dropbox).
#### --dropbox-chunk-size
Upload chunk size. (< 150M).
Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can
deal with retries. Setting this larger will increase the speed
slightly (at most 10% for 128MB in tests) at the cost of using more
memory. It can be set smaller if you are tight on memory.
- Config: chunk_size
- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
- Type: SizeSuffix
- Default: 48M
<!--- autogenerated options stop -->
### Limitations ###

docs/content/ftp.md

@@ -119,8 +119,51 @@ will be time of upload.
FTP does not support any checksums.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/ftp/ftp.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to ftp (FTP Connection).
#### --ftp-host
FTP host to connect to
- Config: host
- Env Var: RCLONE_FTP_HOST
- Type: string
- Default: ""
- Examples:
    - "ftp.example.com"
        - Connect to ftp.example.com
#### --ftp-user
FTP username, leave blank for current username, ncw
- Config: user
- Env Var: RCLONE_FTP_USER
- Type: string
- Default: ""
#### --ftp-port
FTP port, leave blank to use default (21)
- Config: port
- Env Var: RCLONE_FTP_PORT
- Type: string
- Default: ""
#### --ftp-pass
FTP password
- Config: pass
- Env Var: RCLONE_FTP_PASS
- Type: string
- Default: ""
<!--- autogenerated options stop -->
### Limitations ###

docs/content/googlecloudstorage.md

@@ -229,5 +229,163 @@ Google google cloud storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/googlecloudstorage/googlecloudstorage.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
#### --gcs-client-id
Google Application Client Id
Leave blank normally.
- Config: client_id
- Env Var: RCLONE_GCS_CLIENT_ID
- Type: string
- Default: ""
#### --gcs-client-secret
Google Application Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_GCS_CLIENT_SECRET
- Type: string
- Default: ""
#### --gcs-project-number
Project number.
Optional - needed only for list/create/delete buckets - see your developer console.
- Config: project_number
- Env Var: RCLONE_GCS_PROJECT_NUMBER
- Type: string
- Default: ""
#### --gcs-service-account-file
Service Account Credentials JSON file path
Leave blank normally.
Needed only if you want to use SA instead of interactive login.
- Config: service_account_file
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
- Type: string
- Default: ""
#### --gcs-service-account-credentials
Service Account Credentials JSON blob
Leave blank normally.
Needed only if you want to use SA instead of interactive login.
- Config: service_account_credentials
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
- Type: string
- Default: ""
#### --gcs-object-acl
Access Control List for new objects.
- Config: object_acl
- Env Var: RCLONE_GCS_OBJECT_ACL
- Type: string
- Default: ""
- Examples:
    - "authenticatedRead"
        - Object owner gets OWNER access, and all Authenticated Users get READER access.
    - "bucketOwnerFullControl"
        - Object owner gets OWNER access, and project team owners get OWNER access.
    - "bucketOwnerRead"
        - Object owner gets OWNER access, and project team owners get READER access.
    - "private"
        - Object owner gets OWNER access [default if left blank].
    - "projectPrivate"
        - Object owner gets OWNER access, and project team members get access according to their roles.
    - "publicRead"
        - Object owner gets OWNER access, and all Users get READER access.
#### --gcs-bucket-acl
Access Control List for new buckets.
- Config: bucket_acl
- Env Var: RCLONE_GCS_BUCKET_ACL
- Type: string
- Default: ""
- Examples:
    - "authenticatedRead"
        - Project team owners get OWNER access, and all Authenticated Users get READER access.
    - "private"
        - Project team owners get OWNER access [default if left blank].
    - "projectPrivate"
        - Project team members get access according to their roles.
    - "publicRead"
        - Project team owners get OWNER access, and all Users get READER access.
    - "publicReadWrite"
        - Project team owners get OWNER access, and all Users get WRITER access.
#### --gcs-location
Location for the newly created buckets.
- Config: location
- Env Var: RCLONE_GCS_LOCATION
- Type: string
- Default: ""
- Examples:
    - ""
        - Empty for default location (US).
    - "asia"
        - Multi-regional location for Asia.
    - "eu"
        - Multi-regional location for Europe.
    - "us"
        - Multi-regional location for United States.
    - "asia-east1"
        - Taiwan.
    - "asia-northeast1"
        - Tokyo.
    - "asia-southeast1"
        - Singapore.
    - "australia-southeast1"
        - Sydney.
    - "europe-west1"
        - Belgium.
    - "europe-west2"
        - London.
    - "us-central1"
        - Iowa.
    - "us-east1"
        - South Carolina.
    - "us-east4"
        - Northern Virginia.
    - "us-west1"
        - Oregon.
#### --gcs-storage-class
The storage class to use when storing objects in Google Cloud Storage.
- Config: storage_class
- Env Var: RCLONE_GCS_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
    - ""
        - Default
    - "MULTI_REGIONAL"
        - Multi-regional storage class
    - "REGIONAL"
        - Regional storage class
    - "NEARLINE"
        - Nearline storage class
    - "COLDLINE"
        - Coldline storage class
    - "DURABLE_REDUCED_AVAILABILITY"
        - Durable reduced availability storage class
<!--- autogenerated options stop -->

docs/content/http.md

@@ -126,5 +126,21 @@ without a config file:

    rclone lsd --http-url https://beta.rclone.org :http:

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/http/http.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to http (http Connection).
#### --http-url
URL of http host to connect to
- Config: url
- Env Var: RCLONE_HTTP_URL
- Type: string
- Default: ""
- Examples:
    - "https://example.com"
        - Connect to example.com
<!--- autogenerated options stop -->

docs/content/hubic.md

@@ -128,8 +128,48 @@ amongst others) for storing the modification time for an object.
Note that Hubic wraps the Swift backend, so most of the properties of
are the same.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/hubic/hubic.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to hubic (Hubic).
#### --hubic-client-id
Hubic Client Id
Leave blank normally.
- Config: client_id
- Env Var: RCLONE_HUBIC_CLIENT_ID
- Type: string
- Default: ""
#### --hubic-client-secret
Hubic Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_HUBIC_CLIENT_SECRET
- Type: string
- Default: ""
### Advanced Options
Here are the advanced options specific to hubic (Hubic).
#### --hubic-chunk-size
Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The
default for this is 5GB which is its maximum value.
- Config: chunk_size
- Env Var: RCLONE_HUBIC_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5G
<!--- autogenerated options stop -->
### Limitations ###

docs/content/jottacloud.md

@@ -124,8 +124,76 @@ To view your current quota you can use the `rclone about remote:`
command which will display your usage limit (unless it is unlimited)
and the current usage.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/jottacloud/jottacloud.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to jottacloud (JottaCloud).
#### --jottacloud-user
User Name
- Config: user
- Env Var: RCLONE_JOTTACLOUD_USER
- Type: string
- Default: ""
#### --jottacloud-pass
Password.
- Config: pass
- Env Var: RCLONE_JOTTACLOUD_PASS
- Type: string
- Default: ""
#### --jottacloud-mountpoint
The mountpoint to use.
- Config: mountpoint
- Env Var: RCLONE_JOTTACLOUD_MOUNTPOINT
- Type: string
- Default: ""
- Examples:
    - "Sync"
        - Will be synced by the official client.
    - "Archive"
        - Archive
### Advanced Options
Here are the advanced options specific to jottacloud (JottaCloud).
#### --jottacloud-md5-memory-limit
Files bigger than this will be cached on disk to calculate the MD5 if required.
- Config: md5_memory_limit
- Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
- Type: SizeSuffix
- Default: 10M
#### --jottacloud-hard-delete
Delete files permanently rather than putting them into the trash.
- Config: hard_delete
- Env Var: RCLONE_JOTTACLOUD_HARD_DELETE
- Type: bool
- Default: false
#### --jottacloud-unlink
Remove existing public link to file/folder with link command rather than creating.
Default is false, meaning link command will create or retrieve public link.
- Config: unlink
- Env Var: RCLONE_JOTTACLOUD_UNLINK
- Type: bool
- Default: false
<!--- autogenerated options stop -->
### Limitations ###

docs/content/local.md

@@ -159,5 +159,84 @@ filesystem.
**NB** This flag is only available on Unix based systems. On systems
where it isn't supported (eg Windows) it will be ignored.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/local/local.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to local (Local Disk).
#### --local-nounc
Disable UNC (long path names) conversion on Windows
- Config: nounc
- Env Var: RCLONE_LOCAL_NOUNC
- Type: string
- Default: ""
- Examples:
    - "true"
        - Disables long file names
### Advanced Options
Here are the advanced options specific to local (Local Disk).
#### --copy-links
Follow symlinks and copy the pointed to item.
- Config: copy_links
- Env Var: RCLONE_LOCAL_COPY_LINKS
- Type: bool
- Default: false
#### --skip-links
Don't warn about skipped symlinks.
This flag disables warning messages on skipped symlinks or junction
points, as you explicitly acknowledge that they should be skipped.
- Config: skip_links
- Env Var: RCLONE_LOCAL_SKIP_LINKS
- Type: bool
- Default: false
#### --local-no-unicode-normalization
Don't apply unicode normalization to paths and filenames (Deprecated)
This flag is deprecated now. Rclone no longer normalizes unicode file
names, but it compares them with unicode normalization in the sync
routine instead.
- Config: no_unicode_normalization
- Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
- Type: bool
- Default: false
#### --local-no-check-updated
Don't check to see if the files change during upload
Normally rclone checks the size and modification time of files as they
are being uploaded and aborts with a message which starts "can't copy
- source file is being updated" if the file changes during upload.
However on some file systems this modification time check may fail (eg
[Glusterfs #2206](https://github.com/ncw/rclone/issues/2206)) so this
check can be disabled with this flag.
- Config: no_check_updated
- Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED
- Type: bool
- Default: false
#### --one-file-system
Don't cross filesystem boundaries (unix/macOS only).
- Config: one_file_system
- Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM
- Type: bool
- Default: false
<!--- autogenerated options stop -->

docs/content/mega.md

@@ -96,8 +96,59 @@ messages in the log about duplicates.
Use `rclone dedupe` to fix duplicated files.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/mega/mega.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to mega (Mega).
#### --mega-user
User name
- Config: user
- Env Var: RCLONE_MEGA_USER
- Type: string
- Default: ""
#### --mega-pass
Password.
- Config: pass
- Env Var: RCLONE_MEGA_PASS
- Type: string
- Default: ""
### Advanced Options
Here are the advanced options specific to mega (Mega).
#### --mega-debug
Output more debug from Mega.
If this flag is set (along with -vv) it will print further debugging
information from the mega backend.
- Config: debug
- Env Var: RCLONE_MEGA_DEBUG
- Type: bool
- Default: false
#### --mega-hard-delete
Delete files permanently rather than putting them into the trash.
Normally the mega backend will put all deletions into the trash rather
than permanently deleting them. If you specify this then rclone will
permanently delete objects instead.
- Config: hard_delete
- Env Var: RCLONE_MEGA_HARD_DELETE
- Type: bool
- Default: false
<!--- autogenerated options stop -->
### Limitations ###

docs/content/onedrive.md

@@ -155,8 +155,81 @@ doesn't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Microsoft's apps or via
the OneDrive website.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/onedrive/onedrive.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to onedrive (Microsoft OneDrive).
#### --onedrive-client-id
Microsoft App Client Id
Leave blank normally.
- Config: client_id
- Env Var: RCLONE_ONEDRIVE_CLIENT_ID
- Type: string
- Default: ""
#### --onedrive-client-secret
Microsoft App Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET
- Type: string
- Default: ""
### Advanced Options
Here are the advanced options specific to onedrive (Microsoft OneDrive).
#### --onedrive-chunk-size
Chunk size to upload files with - must be multiple of 320k.
Above this size files will be chunked - must be multiple of 320k. Note
that the chunks will be buffered into memory.
- Config: chunk_size
- Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 10M
#### --onedrive-drive-id
The ID of the drive to use
- Config: drive_id
- Env Var: RCLONE_ONEDRIVE_DRIVE_ID
- Type: string
- Default: ""
#### --onedrive-drive-type
The type of the drive ( personal | business | documentLibrary )
- Config: drive_type
- Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE
- Type: string
- Default: ""
#### --onedrive-expose-onenote-files
Set to make OneNote files show up in directory listings.
By default rclone will hide OneNote files in directory listings because
operations like "Open" and "Update" won't work on them. But this
behaviour may also prevent you from deleting them. If you want to
delete OneNote files or otherwise want them to show up in directory
listing, set this option.
- Config: expose_onenote_files
- Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES
- Type: bool
- Default: false
<!--- autogenerated options stop -->
### Limitations ###

docs/content/opendrive.md

@@ -93,8 +93,30 @@ OpenDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/opendrive/opendrive.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to opendrive (OpenDrive).
#### --opendrive-username
Username
- Config: username
- Env Var: RCLONE_OPENDRIVE_USERNAME
- Type: string
- Default: ""
#### --opendrive-password
Password.
- Config: password
- Env Var: RCLONE_OPENDRIVE_PASSWORD
- Type: string
- Default: ""
<!--- autogenerated options stop -->
### Limitations ###

docs/content/pcloud.md

@@ -134,5 +134,29 @@ Deleted files will be moved to the trash. Your subscription level
will determine how long items stay in the trash. `rclone cleanup` can
be used to empty the trash.

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/pcloud/pcloud.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to pcloud (Pcloud).
#### --pcloud-client-id
Pcloud App Client Id
Leave blank normally.
- Config: client_id
- Env Var: RCLONE_PCLOUD_CLIENT_ID
- Type: string
- Default: ""
#### --pcloud-client-secret
Pcloud App Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_PCLOUD_CLIENT_SECRET
- Type: string
- Default: ""
<!--- autogenerated options stop -->

docs/content/qingstor.md

@@ -152,5 +152,86 @@ credentials. In order of precedence:
- Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY`
- Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY`

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/qingstor/qingstor.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to qingstor (QingCloud Object Storage).
#### --qingstor-env-auth
Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
- Config: env_auth
- Env Var: RCLONE_QINGSTOR_ENV_AUTH
- Type: bool
- Default: false
- Examples:
    - "false"
        - Enter QingStor credentials in the next step
    - "true"
        - Get QingStor credentials from the environment (env vars or IAM)
#### --qingstor-access-key-id
QingStor Access Key ID
Leave blank for anonymous access or runtime credentials.
- Config: access_key_id
- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
- Type: string
- Default: ""
#### --qingstor-secret-access-key
QingStor Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
- Config: secret_access_key
- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
- Type: string
- Default: ""
#### --qingstor-endpoint
Enter an endpoint URL to connect to the QingStor API.
Leave blank to use the default value "https://qingstor.com:443"
- Config: endpoint
- Env Var: RCLONE_QINGSTOR_ENDPOINT
- Type: string
- Default: ""
#### --qingstor-zone
Zone to connect to.
Default is "pek3a".
- Config: zone
- Env Var: RCLONE_QINGSTOR_ZONE
- Type: string
- Default: ""
- Examples:
    - "pek3a"
        - The Beijing (China) Three Zone
        - Needs location constraint pek3a.
    - "sh1a"
        - The Shanghai (China) First Zone
        - Needs location constraint sh1a.
    - "gd2a"
        - The Guangdong (China) Second Zone
        - Needs location constraint gd2a.
### Advanced Options
Here are the advanced options specific to qingstor (QingCloud Object Storage).
#### --qingstor-connection-retries
Number of connection retries.
- Config: connection_retries
- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
- Type: int
- Default: 3
<!--- autogenerated options stop -->
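As an illustration of the advanced option above, the retry count could be raised for an unreliable link either per command or via the environment (the remote and bucket names and the value are placeholders):

    rclone copy /local/dir qingstor:bucket --qingstor-connection-retries 10

    export RCLONE_QINGSTOR_CONNECTION_RETRIES=10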

View File

@ -370,8 +370,531 @@ tries to access the data you will see an error like below.
In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
the object(s) in question before using rclone.
<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
#### --s3-provider
Choose your S3 provider.
- Config: provider
- Env Var: RCLONE_S3_PROVIDER
- Type: string
- Default: ""
- Examples:
- "AWS"
- Amazon Web Services (AWS) S3
- "Ceph"
- Ceph Object Storage
- "DigitalOcean"
- Digital Ocean Spaces
- "Dreamhost"
- Dreamhost DreamObjects
- "IBMCOS"
- IBM COS S3
- "Minio"
- Minio Object Storage
- "Wasabi"
- Wasabi Object Storage
- "Other"
- Any other S3 compatible provider
#### --s3-env-auth
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key are blank.
- Config: env_auth
- Env Var: RCLONE_S3_ENV_AUTH
- Type: bool
- Default: false
- Examples:
- "false"
- Enter AWS credentials in the next step
- "true"
- Get AWS credentials from the environment (env vars or IAM)
#### --s3-access-key-id
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
- Config: access_key_id
- Env Var: RCLONE_S3_ACCESS_KEY_ID
- Type: string
- Default: ""
#### --s3-secret-access-key
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
- Config: secret_access_key
- Env Var: RCLONE_S3_SECRET_ACCESS_KEY
- Type: string
- Default: ""
#### --s3-region
Region to connect to.
- Config: region
- Env Var: RCLONE_S3_REGION
- Type: string
- Default: ""
- Examples:
- "us-east-1"
- The default endpoint - a good choice if you are unsure.
- US Region, Northern Virginia or Pacific Northwest.
- Leave location constraint empty.
- "us-east-2"
- US East (Ohio) Region
- Needs location constraint us-east-2.
- "us-west-2"
- US West (Oregon) Region
- Needs location constraint us-west-2.
- "us-west-1"
- US West (Northern California) Region
- Needs location constraint us-west-1.
- "ca-central-1"
- Canada (Central) Region
- Needs location constraint ca-central-1.
- "eu-west-1"
- EU (Ireland) Region
- Needs location constraint EU or eu-west-1.
- "eu-west-2"
- EU (London) Region
- Needs location constraint eu-west-2.
- "eu-central-1"
- EU (Frankfurt) Region
- Needs location constraint eu-central-1.
- "ap-southeast-1"
- Asia Pacific (Singapore) Region
- Needs location constraint ap-southeast-1.
- "ap-southeast-2"
- Asia Pacific (Sydney) Region
- Needs location constraint ap-southeast-2.
- "ap-northeast-1"
- Asia Pacific (Tokyo) Region
- Needs location constraint ap-northeast-1.
- "ap-northeast-2"
- Asia Pacific (Seoul)
- Needs location constraint ap-northeast-2.
- "ap-south-1"
- Asia Pacific (Mumbai)
- Needs location constraint ap-south-1.
- "sa-east-1"
- South America (Sao Paulo) Region
- Needs location constraint sa-east-1.
#### --s3-region
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
- Config: region
- Env Var: RCLONE_S3_REGION
- Type: string
- Default: ""
- Examples:
- ""
- Use this if unsure. Will use v4 signatures and an empty region.
- "other-v2-signature"
- Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
#### --s3-endpoint
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
#### --s3-endpoint
Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
- Examples:
- "s3-api.us-geo.objectstorage.softlayer.net"
- US Cross Region Endpoint
- "s3-api.dal.us-geo.objectstorage.softlayer.net"
- US Cross Region Dallas Endpoint
- "s3-api.wdc-us-geo.objectstorage.softlayer.net"
- US Cross Region Washington DC Endpoint
- "s3-api.sjc-us-geo.objectstorage.softlayer.net"
- US Cross Region San Jose Endpoint
- "s3-api.us-geo.objectstorage.service.networklayer.com"
- US Cross Region Private Endpoint
- "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
- US Cross Region Dallas Private Endpoint
- "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
- US Cross Region Washington DC Private Endpoint
- "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
- US Cross Region San Jose Private Endpoint
- "s3.us-east.objectstorage.softlayer.net"
- US Region East Endpoint
- "s3.us-east.objectstorage.service.networklayer.com"
- US Region East Private Endpoint
- "s3.us-south.objectstorage.softlayer.net"
- US Region South Endpoint
- "s3.us-south.objectstorage.service.networklayer.com"
- US Region South Private Endpoint
- "s3.eu-geo.objectstorage.softlayer.net"
- EU Cross Region Endpoint
- "s3.fra-eu-geo.objectstorage.softlayer.net"
- EU Cross Region Frankfurt Endpoint
- "s3.mil-eu-geo.objectstorage.softlayer.net"
- EU Cross Region Milan Endpoint
- "s3.ams-eu-geo.objectstorage.softlayer.net"
- EU Cross Region Amsterdam Endpoint
- "s3.eu-geo.objectstorage.service.networklayer.com"
- EU Cross Region Private Endpoint
- "s3.fra-eu-geo.objectstorage.service.networklayer.com"
- EU Cross Region Frankfurt Private Endpoint
- "s3.mil-eu-geo.objectstorage.service.networklayer.com"
- EU Cross Region Milan Private Endpoint
- "s3.ams-eu-geo.objectstorage.service.networklayer.com"
- EU Cross Region Amsterdam Private Endpoint
- "s3.eu-gb.objectstorage.softlayer.net"
- Great Britain Endpoint
- "s3.eu-gb.objectstorage.service.networklayer.com"
- Great Britain Private Endpoint
- "s3.ap-geo.objectstorage.softlayer.net"
- APAC Cross Regional Endpoint
- "s3.tok-ap-geo.objectstorage.softlayer.net"
- APAC Cross Regional Tokyo Endpoint
- "s3.hkg-ap-geo.objectstorage.softlayer.net"
- APAC Cross Regional Hong Kong Endpoint
- "s3.seo-ap-geo.objectstorage.softlayer.net"
- APAC Cross Regional Seoul Endpoint
- "s3.ap-geo.objectstorage.service.networklayer.com"
- APAC Cross Regional Private Endpoint
- "s3.tok-ap-geo.objectstorage.service.networklayer.com"
- APAC Cross Regional Tokyo Private Endpoint
- "s3.hkg-ap-geo.objectstorage.service.networklayer.com"
- APAC Cross Regional Hong Kong Private Endpoint
- "s3.seo-ap-geo.objectstorage.service.networklayer.com"
- APAC Cross Regional Seoul Private Endpoint
- "s3.mel01.objectstorage.softlayer.net"
- Melbourne Single Site Endpoint
- "s3.mel01.objectstorage.service.networklayer.com"
- Melbourne Single Site Private Endpoint
- "s3.tor01.objectstorage.softlayer.net"
- Toronto Single Site Endpoint
- "s3.tor01.objectstorage.service.networklayer.com"
- Toronto Single Site Private Endpoint
#### --s3-endpoint
Endpoint for S3 API.
Required when using an S3 clone.
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
- Examples:
- "objects-us-west-1.dream.io"
- Dream Objects endpoint
- "nyc3.digitaloceanspaces.com"
- Digital Ocean Spaces New York 3
- "ams3.digitaloceanspaces.com"
- Digital Ocean Spaces Amsterdam 3
- "sgp1.digitaloceanspaces.com"
- Digital Ocean Spaces Singapore 1
- "s3.wasabisys.com"
- Wasabi Object Storage
#### --s3-location-constraint
Location constraint - must be set to match the Region.
Used when creating buckets only.
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Type: string
- Default: ""
- Examples:
- ""
- Empty for US Region, Northern Virginia or Pacific Northwest.
- "us-east-2"
- US East (Ohio) Region.
- "us-west-2"
- US West (Oregon) Region.
- "us-west-1"
- US West (Northern California) Region.
- "ca-central-1"
- Canada (Central) Region.
- "eu-west-1"
- EU (Ireland) Region.
- "eu-west-2"
- EU (London) Region.
- "EU"
- EU Region.
- "ap-southeast-1"
- Asia Pacific (Singapore) Region.
- "ap-southeast-2"
- Asia Pacific (Sydney) Region.
- "ap-northeast-1"
- Asia Pacific (Tokyo) Region.
- "ap-northeast-2"
- Asia Pacific (Seoul)
- "ap-south-1"
- Asia Pacific (Mumbai)
- "sa-east-1"
- South America (Sao Paulo) Region.
#### --s3-location-constraint
Location constraint - must match endpoint when using IBM Cloud Public.
For on-prem COS, do not make a selection from this list, hit enter
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Type: string
- Default: ""
- Examples:
- "us-standard"
- US Cross Region Standard
- "us-vault"
- US Cross Region Vault
- "us-cold"
- US Cross Region Cold
- "us-flex"
- US Cross Region Flex
- "us-east-standard"
- US East Region Standard
- "us-east-vault"
- US East Region Vault
- "us-east-cold"
- US East Region Cold
- "us-east-flex"
- US East Region Flex
- "us-south-standard"
- US South Region Standard
- "us-south-vault"
- US South Region Vault
- "us-south-cold"
- US South Region Cold
- "us-south-flex"
- US South Region Flex
- "eu-standard"
- EU Cross Region Standard
- "eu-vault"
- EU Cross Region Vault
- "eu-cold"
- EU Cross Region Cold
- "eu-flex"
- EU Cross Region Flex
- "eu-gb-standard"
- Great Britan Standard
- "eu-gb-vault"
- Great Britan Vault
- "eu-gb-cold"
- Great Britan Cold
- "eu-gb-flex"
- Great Britan Flex
- "ap-standard"
- APAC Standard
- "ap-vault"
- APAC Vault
- "ap-cold"
- APAC Cold
- "ap-flex"
- APAC Flex
- "mel01-standard"
- Melbourne Standard
- "mel01-vault"
- Melbourne Vault
- "mel01-cold"
- Melbourne Cold
- "mel01-flex"
- Melbourne Flex
- "tor01-standard"
- Toronto Standard
- "tor01-vault"
- Toronto Vault
- "tor01-cold"
- Toronto Cold
- "tor01-flex"
- Toronto Flex
#### --s3-location-constraint
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Type: string
- Default: ""
#### --s3-acl
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
- Config: acl
- Env Var: RCLONE_S3_ACL
- Type: string
- Default: ""
- Examples:
- "private"
- Owner gets FULL_CONTROL. No one else has access rights (default).
- "public-read"
- Owner gets FULL_CONTROL. The AllUsers group gets READ access.
- "public-read-write"
- Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
- Granting this on a bucket is generally not recommended.
- "authenticated-read"
- Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
- "bucket-owner-read"
- Object owner gets FULL_CONTROL. Bucket owner gets READ access.
- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- "bucket-owner-full-control"
- Both the object owner and the bucket owner get FULL_CONTROL over the object.
- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- "private"
- Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
- "public-read"
- Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
- "public-read-write"
- Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
- "authenticated-read"
- Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
#### --s3-server-side-encryption
The server-side encryption algorithm used when storing this object in S3.
- Config: server_side_encryption
- Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
- Type: string
- Default: ""
- Examples:
- ""
- None
- "AES256"
- AES256
- "aws:kms"
- aws:kms
#### --s3-sse-kms-key-id
If using KMS ID you must provide the ARN of the Key.
- Config: sse_kms_key_id
- Env Var: RCLONE_S3_SSE_KMS_KEY_ID
- Type: string
- Default: ""
- Examples:
- ""
- None
- "arn:aws:kms:us-east-1:*"
- arn:aws:kms:*
#### --s3-storage-class
The storage class to use when storing new objects in S3.
- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
- ""
- Default
- "STANDARD"
- Standard storage class
- "REDUCED_REDUNDANCY"
- Reduced redundancy storage class
- "STANDARD_IA"
- Standard Infrequent Access storage class
- "ONEZONE_IA"
- One Zone Infrequent Access storage class
### Advanced Options
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
#### --s3-chunk-size
Chunk size to use for uploading.
Any files larger than this will be uploaded in chunks of this
size. The default is 5MB. The minimum is 5MB.
Note that "--s3-upload-concurrency" chunks of this size are buffered
in memory per transfer.
If you are transferring large files over high speed links and you have
enough memory, then increasing this will speed up the transfers.
- Config: chunk_size
- Env Var: RCLONE_S3_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5M
#### --s3-disable-checksum
Don't store MD5 checksum with object metadata
- Config: disable_checksum
- Env Var: RCLONE_S3_DISABLE_CHECKSUM
- Type: bool
- Default: false
#### --s3-session-token
An AWS session token
- Config: session_token
- Env Var: RCLONE_S3_SESSION_TOKEN
- Type: string
- Default: ""
#### --s3-upload-concurrency
Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded
concurrently.
If you are uploading small numbers of large files over high speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.
- Config: upload_concurrency
- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
- Type: int
- Default: 2
#### --s3-force-path-style
If true use path style access, if false use virtual hosted style.
If this is true (the default) then rclone will use path style access,
if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.
Some providers (eg Aliyun OSS or Netease COS) require this set to false.
- Config: force_path_style
- Env Var: RCLONE_S3_FORCE_PATH_STYLE
- Type: bool
- Default: true
<!--- autogenerated options stop -->
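As a sketch of the tuning described under `--s3-chunk-size` and `--s3-upload-concurrency` (the remote name, path and figures are illustrative; note that roughly chunk size × concurrency is buffered in memory per transfer):

    rclone copy /data/bigfiles s3remote:bucket/path --s3-chunk-size 64M --s3-upload-concurrency 8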
### Anonymous access to public buckets ###

View File

@ -158,8 +158,126 @@ upload (for example, certain configurations of ProFTPd with mod_sftp). If you
are using one of these servers, you can set the option `set_modtime = false` in
your RClone backend configuration to disable this behaviour.
<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/sftp/sftp.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to sftp (SSH/SFTP Connection).
#### --sftp-host
SSH host to connect to
- Config: host
- Env Var: RCLONE_SFTP_HOST
- Type: string
- Default: ""
- Examples:
- "example.com"
- Connect to example.com
#### --sftp-user
SSH username, leave blank for current username, ncw
- Config: user
- Env Var: RCLONE_SFTP_USER
- Type: string
- Default: ""
#### --sftp-port
SSH port, leave blank to use default (22)
- Config: port
- Env Var: RCLONE_SFTP_PORT
- Type: string
- Default: ""
#### --sftp-pass
SSH password, leave blank to use ssh-agent.
- Config: pass
- Env Var: RCLONE_SFTP_PASS
- Type: string
- Default: ""
#### --sftp-key-file
Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- Config: key_file
- Env Var: RCLONE_SFTP_KEY_FILE
- Type: string
- Default: ""
#### --sftp-use-insecure-cipher
Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
- Config: use_insecure_cipher
- Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER
- Type: bool
- Default: false
- Examples:
- "false"
- Use default Cipher list.
- "true"
- Enables the use of the aes128-cbc cipher.
#### --sftp-disable-hashcheck
Disable the execution of SSH commands to determine if remote file hashing is available.
Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
- Config: disable_hashcheck
- Env Var: RCLONE_SFTP_DISABLE_HASHCHECK
- Type: bool
- Default: false
### Advanced Options
Here are the advanced options specific to sftp (SSH/SFTP Connection).
#### --sftp-ask-password
Allow asking for SFTP password when needed.
- Config: ask_password
- Env Var: RCLONE_SFTP_ASK_PASSWORD
- Type: bool
- Default: false
#### --sftp-path-override
Override path used by SSH connection.
This allows checksum calculation when SFTP and SSH paths are
different. This issue affects among others Synology NAS boxes.
Shared folders can be found in directories representing volumes

    rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory

Home directory can be found in a shared folder called "home"

    rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory
- Config: path_override
- Env Var: RCLONE_SFTP_PATH_OVERRIDE
- Type: string
- Default: ""
#### --sftp-set-modtime
Set the modified time on the remote if set.
- Config: set_modtime
- Env Var: RCLONE_SFTP_SET_MODTIME
- Type: bool
- Default: true
<!--- autogenerated options stop -->
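A minimal key-based remote using the options above might look like this (the remote name, host, user and key path are placeholders):

    [myserver]
    type = sftp
    host = example.com
    user = backup
    key_file = /home/backup/.ssh/id_rsa

    rclone lsd myserver: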
### Limitations ###

View File

@ -261,8 +261,200 @@ sufficient to determine if it is "dirty". By using `--update` along with
`--use-server-modtime`, you can avoid the extra API call and simply upload
files whose local modtime is newer than the time it was last uploaded.
<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/swift/swift.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to swift (Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
#### --swift-env-auth
Get swift credentials from environment variables in standard OpenStack form.
- Config: env_auth
- Env Var: RCLONE_SWIFT_ENV_AUTH
- Type: bool
- Default: false
- Examples:
- "false"
- Enter swift credentials in the next step
- "true"
- Get swift credentials from environment vars. Leave other fields blank if using this.
#### --swift-user
User name to log in (OS_USERNAME).
- Config: user
- Env Var: RCLONE_SWIFT_USER
- Type: string
- Default: ""
#### --swift-key
API key or password (OS_PASSWORD).
- Config: key
- Env Var: RCLONE_SWIFT_KEY
- Type: string
- Default: ""
#### --swift-auth
Authentication URL for server (OS_AUTH_URL).
- Config: auth
- Env Var: RCLONE_SWIFT_AUTH
- Type: string
- Default: ""
- Examples:
- "https://auth.api.rackspacecloud.com/v1.0"
- Rackspace US
- "https://lon.auth.api.rackspacecloud.com/v1.0"
- Rackspace UK
- "https://identity.api.rackspacecloud.com/v2.0"
- Rackspace v2
- "https://auth.storage.memset.com/v1.0"
- Memset Memstore UK
- "https://auth.storage.memset.com/v2.0"
- Memset Memstore UK v2
- "https://auth.cloud.ovh.net/v2.0"
- OVH
#### --swift-user-id
User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- Config: user_id
- Env Var: RCLONE_SWIFT_USER_ID
- Type: string
- Default: ""
#### --swift-domain
User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- Config: domain
- Env Var: RCLONE_SWIFT_DOMAIN
- Type: string
- Default: ""
#### --swift-tenant
Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- Config: tenant
- Env Var: RCLONE_SWIFT_TENANT
- Type: string
- Default: ""
#### --swift-tenant-id
Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- Config: tenant_id
- Env Var: RCLONE_SWIFT_TENANT_ID
- Type: string
- Default: ""
#### --swift-tenant-domain
Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- Config: tenant_domain
- Env Var: RCLONE_SWIFT_TENANT_DOMAIN
- Type: string
- Default: ""
#### --swift-region
Region name - optional (OS_REGION_NAME)
- Config: region
- Env Var: RCLONE_SWIFT_REGION
- Type: string
- Default: ""
#### --swift-storage-url
Storage URL - optional (OS_STORAGE_URL)
- Config: storage_url
- Env Var: RCLONE_SWIFT_STORAGE_URL
- Type: string
- Default: ""
#### --swift-auth-token
Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- Config: auth_token
- Env Var: RCLONE_SWIFT_AUTH_TOKEN
- Type: string
- Default: ""
#### --swift-auth-version
AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- Config: auth_version
- Env Var: RCLONE_SWIFT_AUTH_VERSION
- Type: int
- Default: 0
#### --swift-endpoint-type
Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
- Config: endpoint_type
- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE
- Type: string
- Default: "public"
- Examples:
- "public"
- Public (default, choose this if not sure)
- "internal"
- Internal (use internal service net)
- "admin"
- Admin
#### --swift-storage-policy
The storage policy to use when creating a new container
This applies the specified storage policy when creating a new
container. The policy cannot be changed afterwards. The allowed
configuration values and their meaning depend on your Swift storage
provider.
- Config: storage_policy
- Env Var: RCLONE_SWIFT_STORAGE_POLICY
- Type: string
- Default: ""
- Examples:
- ""
- Default
- "pcs"
- OVH Public Cloud Storage
- "pca"
- OVH Public Cloud Archive
### Advanced Options
Here are the advanced options specific to swift (Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
#### --swift-chunk-size
Above this size files will be chunked into a _segments container. The
default for this is 5GB which is its maximum value.
- Config: chunk_size
- Env Var: RCLONE_SWIFT_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5G
<!--- autogenerated options stop -->
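With `env_auth = true` the remote itself can stay almost empty and the standard OpenStack variables documented above are read from the environment (a sketch; the remote name and credentials are placeholders):

    [myswift]
    type = swift
    env_auth = true

    export OS_USERNAME=user
    export OS_PASSWORD=secret
    export OS_AUTH_URL=https://auth.cloud.ovh.net/v2.0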
### Modified time ###

View File

@ -142,5 +142,20 @@ Copy another local directory to the union directory called source, which will be
rclone copy C:\source remote:source
<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/union/union.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to union (A stackable unification remote, which can appear to merge the contents of several remotes).
#### --union-remotes
List of space separated remotes.
Can be 'remotea:test/dir remoteb:', '"remotea:test/space dir" remoteb:', etc.
The last remote is used to write to.
- Config: remotes
- Env Var: RCLONE_UNION_REMOTES
- Type: string
- Default: ""
<!--- autogenerated options stop -->
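For example, a union of two remotes where writes go to the last one listed (`remoteb:`):

    [combined]
    type = union
    remotes = remotea:test/dir remoteb: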

View File

@ -101,8 +101,69 @@ Owncloud or Nextcloud rclone will support modified times.
Hashes are not supported.
<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/webdav/webdav.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to webdav (Webdav).
#### --webdav-url
URL of http host to connect to
- Config: url
- Env Var: RCLONE_WEBDAV_URL
- Type: string
- Default: ""
- Examples:
- "https://example.com"
- Connect to example.com
#### --webdav-vendor
Name of the Webdav site/service/software you are using
- Config: vendor
- Env Var: RCLONE_WEBDAV_VENDOR
- Type: string
- Default: ""
- Examples:
- "nextcloud"
- Nextcloud
- "owncloud"
- Owncloud
- "sharepoint"
- Sharepoint
- "other"
- Other site/service or software
#### --webdav-user
User name
- Config: user
- Env Var: RCLONE_WEBDAV_USER
- Type: string
- Default: ""
#### --webdav-pass
Password.
- Config: pass
- Env Var: RCLONE_WEBDAV_PASS
- Type: string
- Default: ""
#### --webdav-bearer-token
Bearer token instead of user/pass (eg a Macaroon)
- Config: bearer_token
- Env Var: RCLONE_WEBDAV_BEARER_TOKEN
- Type: string
- Default: ""
<!--- autogenerated options stop -->
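A typical Nextcloud-style remote using the options above (the remote name, URL and user are placeholders; the password is best entered through the interactive config rather than written out by hand):

    [mywebdav]
    type = webdav
    url = https://example.com
    vendor = nextcloud
    user = alice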
## Provider notes ##

View File

@ -128,5 +128,29 @@ If you wish to empty your trash you can use the `rclone cleanup remote:`
command which will permanently delete all your trashed files. This command
does not take any path arguments.
<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/yandex/yandex.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to yandex (Yandex Disk).
#### --yandex-client-id
Yandex Client Id
Leave blank normally.
- Config: client_id
- Env Var: RCLONE_YANDEX_CLIENT_ID
- Type: string
- Default: ""
#### --yandex-client-secret
Yandex Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_YANDEX_CLIENT_SECRET
- Type: string
- Default: ""
<!--- autogenerated options stop -->
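If you have registered your own Yandex OAuth application, its credentials can also be supplied through the environment (placeholder values shown):

    export RCLONE_YANDEX_CLIENT_ID=your_client_id
    export RCLONE_YANDEX_CLIENT_SECRET=your_client_secret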