Version v1.43

Nick Craig-Wood 2018-09-01 12:58:00 +01:00
parent a3fec7f030
commit 20c55a6829
67 changed files with 23364 additions and 12145 deletions

MANUAL.md (3246): file diff suppressed because it is too large
MANUAL.txt (1946): file diff suppressed because it is too large
Two further file diffs suppressed because they are too large.


@@ -1,12 +1,12 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone
Sync files and directories to and from local and remote object stores - v1.43
### Synopsis
@@ -24,6 +24,7 @@ from various cloud storage systems and using file transfer services, such as:
* Google Drive
* HTTP
* Hubic
* Jottacloud
* Mega
* Microsoft Azure Blob Storage
* Microsoft OneDrive
@@ -59,150 +60,259 @@ rclone [flags]
### Options
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
-h, --help help for rclone
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int              Number of connection retries. (default 3)
      --qingstor-endpoint string                     Endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
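Several of the size flags above (`--max-size`, `--min-size`) take a value "in k or suffix b|k|M|G", i.e. a bare number is read as kBytes and a suffix scales by powers of 1024. A minimal sketch of that convention, not rclone's actual implementation:

```python
# Sketch of the size-suffix convention described in the help text above:
# a bare number means kBytes; b/k/M/G scale by powers of 1024.

def parse_size(value: str) -> int:
    """Return the size in bytes for a flag value like '100', '64k', or '1.5M'."""
    multipliers = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if value and value[-1] in multipliers:
        return int(float(value[:-1]) * multipliers[value[-1]])
    return int(float(value) * 1024)  # no suffix: interpreted as kBytes

print(parse_size("64k"))   # 65536
print(parse_size("1M"))    # 1048576
```

So `--max-size 100` and `--max-size 100k` would describe the same limit under this reading.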
@ -216,12 +326,13 @@ rclone [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied * [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied * [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied
* [rclone copyurl](/commands/rclone_copyurl/) - Copy url content to dest.
* [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of a crypted remote. * [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of a crypted remote.
* [rclone cryptdecode](/commands/rclone_cryptdecode/) - Cryptdecode returns unencrypted file names. * [rclone cryptdecode](/commands/rclone_cryptdecode/) - Cryptdecode returns unencrypted file names.
* [rclone dbhashsum](/commands/rclone_dbhashsum/) - Produces a Dropbox hash file for all the objects in the path. * [rclone dbhashsum](/commands/rclone_dbhashsum/) - Produces a Dropbox hash file for all the objects in the path.
* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files and delete/rename them. * [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files and delete/rename them.
* [rclone delete](/commands/rclone_delete/) - Remove the contents of path. * [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
* [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file path from remote. * [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file from remote.
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied. * [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone hashsum](/commands/rclone_hashsum/) - Produces an hashsum file for all the objects in the path. * [rclone hashsum](/commands/rclone_hashsum/) - Produces an hashsum file for all the objects in the path.
@@ -252,4 +363,4 @@ rclone [flags]
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion. * [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number. * [rclone version](/commands/rclone_version/) - Show the version number.
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
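Most of the backend flags listed above can alternatively be set persistently in the rclone config file rather than on the command line. As a hedged sketch (the remote name and credential values below are placeholders, not taken from this changelog), a Google Drive remote corresponding to `--drive-client-id` and `--drive-client-secret` might look like:

```ini
; Hypothetical rclone.conf stanza -- remote name and credentials are
; illustrative placeholders only.
[mydrive]
type = drive
client_id = 1234567890.apps.googleusercontent.com
client_secret = example-secret
scope = drive
```

With a stanza like this in place, a command such as `rclone lsd mydrive:` would pick up those values without any backend flags on the command line.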
View File
@@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone about" title: "rclone about"
slug: rclone_about slug: rclone_about
url: /commands/rclone_about/ url: /commands/rclone_about/
@@ -69,152 +69,261 @@ rclone about remote: [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
      --delete-after                      When synchronizing, delete files on destination after transferring       --cache-db-wait-time Duration             How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --delete-before                     When synchronizing, delete files on destination before transferring       --cache-dir string                        Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
      --exclude-if-present string         Exclude directories if filename is present       --delete-after                            When synchronizing, delete files on destination after transferring (default)
      --fast-list                         Use recursive list if available. Uses more memory but fewer transactions.       --delete-before                           When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
      --gcs-location string               Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).       --drive-acknowledge-abuse                 Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
      --ignore-errors                     delete even if there are I/O errors       --drive-chunk-size SizeSuffix             Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int         Number of connection retries. (default 3)
      --qingstor-endpoint string                Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone authorize" title: "rclone authorize"
slug: rclone_authorize slug: rclone_authorize
url: /commands/rclone_authorize/ url: /commands/rclone_authorize/
@@ -28,152 +28,261 @@ rclone authorize [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
-      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm                      If enabled, do not request console confirmation.
-      --azureblob-chunk-size int          Upload chunk size. Must fit in memory. (default 4M)
-      --azureblob-upload-cutoff int       Cutoff for switching to chunked upload (default 256M)
-      --b2-chunk-size int                 Upload chunk size. Must fit in memory. (default 96M)
-      --b2-hard-delete                    Permanently delete files on remote removal, otherwise hide files.
-      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
-      --b2-upload-cutoff int              Cutoff for switching to chunked upload (default 190.735M)
-      --b2-versions                       Include old versions in directory listings.
-      --backup-dir string                 Make backups into hierarchy based in DIR.
-      --bind string                       Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-upload-cutoff int             Cutoff for switching to multipart upload (default 50M)
-      --buffer-size int                   Buffer size when copying files. (default 16M)
-      --bwlimit BwTimetable               Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
-      --cache-chunk-no-memory             Disable the in-memory cache for storing chunks during streaming
-      --cache-chunk-path string           Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
-      --cache-chunk-size string           The size of a chunk (default "5M")
-      --cache-db-path string              Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
-      --cache-db-purge                    Purge the cache DB before
-      --cache-db-wait-time duration       How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string                  Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
-      --cache-info-age string             How much time should object info be stored in cache (default "6h")
-      --cache-read-retries int            How many times to retry a read from a cache storage (default 10)
-      --cache-rps int                     Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
-      --cache-tmp-upload-path string      Directory to keep temporary files until they are uploaded to the cloud storage
-      --cache-tmp-wait-time string        How long should files be stored in local cache before being uploaded (default "15m")
-      --cache-total-chunk-size string     The total size which the chunks can take up from the disk (default "10G")
-      --cache-workers int                 How many workers should run in parallel to download chunks (default 4)
-      --cache-writes                      Will cache file data on writes through the FS
-      --checkers int                      Number of checkers to run in parallel. (default 8)
-  -c, --checksum                          Skip based on checksum & size, not mod-time & size
-      --config string                     Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration               Connect timeout (default 1m0s)
-  -L, --copy-links                        Follow symlinks and copy the pointed to item.
-      --cpuprofile string                 Write cpu profile to file
-      --crypt-show-mapping                For all files listed show how the names encrypt.
-      --delete-after                      When synchronizing, delete files on destination after transfering
-      --delete-before                     When synchronizing, delete files on destination before transfering
-      --delete-during                     When synchronizing, delete files during transfer (default)
-      --delete-excluded                   Delete files on dest excluded from sync
-      --disable string                    Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse           Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-alternate-export            Use alternate export URLs for google documents export.
-      --drive-auth-owner-only             Only consider files owned by the authenticated user.
-      --drive-chunk-size int              Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-impersonate string          Impersonate this user when using a service account.
-      --drive-list-chunk int              Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-shared-with-me              Only show files that are shared with me
-      --drive-skip-gdocs                  Skip google documents in all listings.
-      --drive-trashed-only                Only show files that are in the trash
-      --drive-upload-cutoff int           Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date            Use created date instead of modified date.
-      --drive-use-trash                   Send files to the trash instead of deleting permanently. (default true)
-      --dropbox-chunk-size int            Upload chunk size. Max 150M. (default 48M)
-  -n, --dry-run                           Do a trial run with no permanent changes
-      --dump string                       List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies                       Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers                      Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray               Exclude files matching pattern
-      --exclude-from stringArray          Read exclude patterns from file
-      --exclude-if-present string         Exclude directories if filename is present
-      --fast-list                         Use recursive list if available. Uses more memory but fewer transactions.
-      --files-from stringArray            Read list of source-file names from file
-  -f, --filter stringArray                Add a file-filtering rule
-      --filter-from stringArray           Read filtering patterns from a file
-      --gcs-location string               Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
-      --gcs-storage-class string          Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
-      --ignore-checksum                   Skip post copy check of checksums.
-      --ignore-errors                     delete even if there are I/O errors
-      --ignore-existing                   Skip all files that exist on destination
-      --ignore-size                       Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
-      --immutable                         Do not modify files. Fail if existing files have been modified.
-      --include stringArray               Include files matching pattern
-      --include-from stringArray          Read include patterns from file
-      --local-no-check-updated            Don't check to see if the files change during upload
-      --local-no-unicode-normalization    Don't apply unicode normalization to paths and filenames
-      --log-file string                   Log everything to this file
-      --log-level string                  Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int             Number of low level retries to do. (default 10)
-      --max-age duration                  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-delete int                    When synchronizing, limit the number of deletes (default -1)
-      --max-depth int                     If set limits the recursion depth to this. (default -1)
-      --max-size int                      Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int                  Maximum size of data to transfer. (default off)
-      --mega-debug                        If set then output more debug from mega.
-      --memprofile string                 Write memory profile to file
-      --min-age duration                  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int                      Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration            Max time diff to be considered the same (default 1ns)
-      --no-check-certificate              Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding                  Don't set Accept-Encoding: gzip.
-      --no-traverse                       Obsolete - does nothing.
-      --no-update-modtime                 Don't update destination mod-time if files identical.
-  -x, --one-file-system                   Don't cross filesystem boundaries.
-      --onedrive-chunk-size int           Above this size files will be chunked - must be multiple of 320k. (default 10M)
-  -q, --quiet                             Print as little stuff as possible
-      --rc                                Enable the remote control server.
-      --rc-addr string                    IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string                    SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string               Client certificate authority to verify clients with
-      --rc-htpasswd string                htpasswd file - if not provided no authentication is done
-      --rc-key string                     SSL PEM Private key
-      --rc-max-header-bytes int           Maximum size of request header (default 4096)
-      --rc-pass string                    Password for authentication.
-      --rc-realm string                   realm for authentication (default "rclone")
-      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration  Timeout for server writing data (default 1h0m0s)
-      --rc-user string                    User name for authentication.
-      --retries int                       Retry operations this many times if they fail (default 3)
-      --retries-sleep duration            Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
-      --s3-acl string                     Canned ACL used when creating buckets and/or storing objects in S3
-      --s3-chunk-size int                 Chunk size to use for uploading (default 5M)
-      --s3-disable-checksum               Don't store MD5 checksum with object metadata
-      --s3-storage-class string           Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
-      --s3-upload-concurrency int         Concurrency for multipart uploads (default 2)
-      --sftp-ask-password                 Allow asking for SFTP password when needed.
-      --size-only                         Skip based on size only, not mod-time or checksum
-      --skip-links                        Don't warn about skipped symlinks.
-      --ssh-path-override string          Override path used by SSH connection.
-      --stats duration                    Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
-      --stats-file-name-length int        Max file name length in stats. 0 for no limit (default 40)
-      --stats-log-level string            Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
-      --stats-unit string                 Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
-      --streaming-upload-cutoff int       Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
-      --suffix string                     Suffix for use with --backup-dir.
-      --swift-chunk-size int              Above this size files will be chunked into a _segments container. (default 5G)
-      --syslog                            Use Syslog for logging
-      --syslog-facility string            Facility for syslog, eg KERN,USER,... (default "DAEMON")
-      --timeout duration                  IO idle timeout (default 5m0s)
-      --tpslimit float                    Limit HTTP transactions per second to this.
-      --tpslimit-burst int                Max burst of transactions for --tpslimit. (default 1)
-      --track-renames                     When synchronizing, track file renames and do a server side move if possible
-      --transfers int                     Number of file transfers to run in parallel. (default 4)
-  -u, --update                            Skip files that are newer on the destination.
-      --use-server-modtime                Use server modified time instead of object metadata
-      --user-agent string                 Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
-  -v, --verbose count                     Print lots more stuff (repeat for more)
+      --acd-auth-url string                Auth server URL.
+      --acd-client-id string               Amazon Application Client ID.
+      --acd-client-secret string           Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string               Token server url.
+      --acd-upload-wait-per-gb Duration    Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string                Remote or path to alias.
+      --ask-password                       Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm                       If enabled, do not request console confirmation.
+      --azureblob-access-tier string       Access tier of blob, supports hot, cool and archive tiers.
+      --azureblob-account string           Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix    Upload chunk size. Must fit in memory. (default 4M)
+      --azureblob-endpoint string          Endpoint for the service
+      --azureblob-key string               Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-sas-url string           SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
+      --b2-account string                  Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix           Upload chunk size. Must fit in memory. (default 96M)
+      --b2-endpoint string                 Endpoint for the service.
+      --b2-hard-delete                     Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string                      Application Key
+      --b2-test-mode string                A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload. (default 190.735M)
+      --b2-versions                        Include old versions in directory listings.
+      --backup-dir string                  Make backups into hierarchy based in DIR.
+      --bind string                        Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string               Box App Client Id.
+      --box-client-secret string           Box App Client Secret
+      --box-commit-retries int             Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix       Cutoff for switching to multipart upload. (default 50M)
+      --buffer-size int                    In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable                Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
+      --cache-chunk-no-memory              Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string            Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix        The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
+      --cache-chunk-total-size SizeSuffix  The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
+      --cache-db-path string               Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+      --cache-db-purge                     Purge the cache DB before
+      --cache-db-wait-time Duration        How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string                   Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+      --cache-info-age Duration            How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
+      --cache-plex-password string         The password of the Plex user
+      --cache-plex-url string              The URL of the Plex server
+      --cache-plex-username string         The username of the Plex user
+      --cache-read-retries int             How many times to retry a read from a cache storage (default 10)
+      --cache-remote string                Remote to cache.
+      --cache-rps int                      Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+      --cache-tmp-upload-path string       Directory to keep temporary files until they are uploaded to the cloud storage
+      --cache-tmp-wait-time Duration       How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int                  How many workers should run in parallel to download chunks (default 4)
+      --cache-writes                       Will cache file data on writes through the FS
+      --checkers int                       Number of checkers to run in parallel. (default 8)
+  -c, --checksum                           Skip based on checksum & size, not mod-time & size
+      --config string                      Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration                Connect timeout (default 1m0s)
+  -L, --copy-links                         Follow symlinks and copy the pointed to item.
+      --cpuprofile string                  Write cpu profile to file
+      --crypt-directory-name-encryption    Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
+      --crypt-password string              Password or pass phrase for encryption.
+      --crypt-password2 string             Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string                Remote to encrypt/decrypt.
+      --crypt-show-mapping                 For all files listed show how the names encrypt.
+      --delete-after                       When synchronizing, delete files on destination after transfering (default)
+      --delete-before                      When synchronizing, delete files on destination before transfering
+      --delete-during                      When synchronizing, delete files during transfer
+      --delete-excluded                    Delete files on dest excluded from sync
+      --disable string                     Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse            Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-alternate-export             Use alternate export URLs for google documents export.
+      --drive-auth-owner-only              Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix        Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+      --drive-client-id string             Google Application Client Id
+      --drive-client-secret string         Google Application Client Secret
+      --drive-formats string               Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-impersonate string           Impersonate this user when using a service account.
+      --drive-keep-revision-forever        Keep new head revision forever.
+      --drive-list-chunk int               Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-root-folder-id string        ID of the root folder
+      --drive-scope string                 Scope that rclone should use when requesting access from drive.
+      --drive-service-account-file string  Service Account Credentials JSON file path
+      --drive-shared-with-me               Only show files that are shared with me
+      --drive-skip-gdocs                   Skip google documents in all listings.
+      --drive-trashed-only                 Only show files that are in the trash
+      --drive-upload-cutoff SizeSuffix     Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date             Use created date instead of modified date.
+      --drive-use-trash                    Send files to the trash instead of deleting permanently. (default true)
+      --dropbox-chunk-size SizeSuffix      Upload chunk size. Max 150M. (default 48M)
+      --dropbox-client-id string           Dropbox App Client Id
+      --dropbox-client-secret string       Dropbox App Client Secret
+  -n, --dry-run                            Do a trial run with no permanent changes
+      --dump string                        List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies                        Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers                       Dump HTTP bodies - may contain sensitive info
+      --exclude stringArray                Exclude files matching pattern
+      --exclude-from stringArray           Read exclude patterns from file
+      --exclude-if-present string          Exclude directories if filename is present
+      --fast-list                          Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray             Read list of source-file names from file
+  -f, --filter stringArray                 Add a file-filtering rule
+      --filter-from stringArray            Read filtering patterns from a file
+      --ftp-host string                    FTP host to connect to
+      --ftp-pass string                    FTP password
+      --ftp-port string                    FTP port, leave blank to use default (21)
+      --ftp-user string                    FTP username, leave blank for current username, ncw
+      --gcs-bucket-acl string              Access Control List for new buckets.
+      --gcs-client-id string               Google Application Client Id
+      --gcs-client-secret string           Google Application Client Secret
+      --gcs-location string                Location for the newly created buckets.
+      --gcs-object-acl string              Access Control List for new objects.
+      --gcs-project-number string          Project number.
+      --gcs-service-account-file string    Service Account Credentials JSON file path
+      --gcs-storage-class string           The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string                    URL of http host to connect to
+      --hubic-client-id string             Hubic Client Id
+      --hubic-client-secret string         Hubic Client Secret
+      --ignore-checksum                    Skip post copy check of checksums.
+      --ignore-errors                      delete even if there are I/O errors
+      --ignore-existing                    Skip all files that exist on destination
+      --ignore-size                        Ignore size when skipping use mod-time or checksum.
+  -I, --ignore-times                       Don't skip files that match size and time - transfer all files
+      --immutable                          Do not modify files. Fail if existing files have been modified.
+      --include stringArray                Include files matching pattern
+      --include-from stringArray           Read include patterns from file
+      --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+      --jottacloud-mountpoint string       The mountpoint to use.
+      --jottacloud-pass string             Password.
+      --jottacloud-user string             User Name
+      --local-no-check-updated             Don't check to see if the files change during upload
+      --local-no-unicode-normalization     Don't apply unicode normalization to paths and filenames
+      --local-nounc string                 Disable UNC (long path names) conversion on Windows
+      --log-file string                    Log everything to this file
+      --log-level string                   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+      --low-level-retries int              Number of low level retries to do. (default 10)
+      --max-age duration                   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-backlog int                    Maximum number of objects in sync or check backlog. (default 10000)
+      --max-delete int                     When synchronizing, limit the number of deletes (default -1)
+      --max-depth int                      If set limits the recursion depth to this. (default -1)
+      --max-size int                       Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-transfer int                   Maximum size of data to transfer. (default off)
+      --mega-debug                         Output more debug from Mega.
+      --mega-hard-delete                   Delete files permanently rather than putting them into the trash.
+      --mega-pass string                   Password.
+      --mega-user string                   User name
+      --memprofile string                  Write memory profile to file
+      --min-age duration                   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size int                       Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+      --modify-window duration             Max time diff to be considered the same (default 1ns)
+      --no-check-certificate               Do not verify the server SSL certificate. Insecure.
+      --no-gzip-encoding                   Don't set Accept-Encoding: gzip.
+      --no-traverse                        Obsolete - does nothing.
+      --no-update-modtime                  Don't update destination mod-time if files identical.
+  -x, --one-file-system                    Don't cross filesystem boundaries (unix/macOS only).
+      --onedrive-chunk-size SizeSuffix     Chunk size to upload files with - must be multiple of 320k. (default 10M)
+      --onedrive-client-id string          Microsoft App Client Id
+      --onedrive-client-secret string      Microsoft App Client Secret
+      --opendrive-password string          Password.
+      --opendrive-username string          Username
+      --pcloud-client-id string            Pcloud App Client Id
+      --pcloud-client-secret string        Pcloud App Client Secret
+  -P, --progress                           Show progress during transfer.
+      --qingstor-access-key-id string      QingStor Access Key ID
+      --qingstor-connection-retries int    Number of connnection retries. (default 3)
+      --qingstor-endpoint string           Enter a endpoint URL to connection QingStor API.
+      --qingstor-env-auth                  Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+      --qingstor-secret-access-key string  QingStor Secret Access Key (password)
+      --qingstor-zone string               Zone to connect to.
+  -q, --quiet                              Print as little stuff as possible
+      --rc                                 Enable the remote control server.
+      --rc-addr string                     IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+      --rc-cert string                     SSL PEM key (concatenation of certificate and CA certificate)
+      --rc-client-ca string                Client certificate authority to verify clients with
+      --rc-htpasswd string                 htpasswd file - if not provided no authentication is done
+      --rc-key string                      SSL PEM Private key
+      --rc-max-header-bytes int            Maximum size of request header (default 4096)
+      --rc-pass string                     Password for authentication.
+      --rc-realm string                    realm for authentication (default "rclone")
+      --rc-server-read-timeout duration    Timeout for server reading data (default 1h0m0s)
+      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
+      --rc-user string                     User name for authentication.
+      --retries int                        Retry operations this many times if they fail (default 3)
+      --retries-sleep duration             Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+      --s3-access-key-id string            AWS Access Key ID.
+      --s3-acl string                      Canned ACL used when creating buckets and/or storing objects in S3.
+      --s3-chunk-size SizeSuffix           Chunk size to use for uploading (default 5M)
+      --s3-disable-checksum                Don't store MD5 checksum with object metadata
+      --s3-endpoint string                 Endpoint for S3 API.
+      --s3-env-auth                        Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+      --s3-force-path-style                If true use path style access if false use virtual hosted style. (default true)
+      --s3-location-constraint string      Location constraint - must be set to match the Region.
+      --s3-provider string                 Choose your S3 provider.
+      --s3-region string                   Region to connect to.
+      --s3-secret-access-key string        AWS Secret Access Key (password)
+      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
+      --s3-sse-kms-key-id string           If using KMS ID you must provide the ARN of Key.
+      --s3-storage-class string            The storage class to use when storing objects in S3.
+      --s3-upload-concurrency int          Concurrency for multipart uploads. (default 2)
+      --sftp-ask-password                  Allow asking for SFTP password when needed.
+      --sftp-disable-hashcheck             Disable the execution of SSH commands to determine if remote file hashing is available.
+      --sftp-host string                   SSH host to connect to
+      --sftp-key-file string               Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+      --sftp-pass string                   SSH password, leave blank to use ssh-agent.
+      --sftp-path-override string          Override path used by SSH connection.
+      --sftp-port string                   SSH port, leave blank to use default (22)
+      --sftp-set-modtime                   Set the modified time on the remote if set. (default true)
+      --sftp-use-insecure-cipher           Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
+      --sftp-user string                   SSH username, leave blank for current username, ncw
+      --size-only                          Skip based on size only, not mod-time or checksum
+      --skip-links                         Don't warn about skipped symlinks.
+      --stats duration                     Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+      --stats-file-name-length int         Max file name length in stats. 0 for no limit (default 40)
+      --stats-log-level string             Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+      --stats-one-line                     Make the stats fit on one line.
+      --stats-unit string                  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+      --streaming-upload-cutoff int        Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --suffix string                      Suffix for use with --backup-dir.
+      --swift-auth string                  Authentication URL for server (OS_AUTH_URL).
+      --swift-auth-token string            Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+      --swift-auth-version int             AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+      --swift-chunk-size SizeSuffix        Above this size files will be chunked into a _segments container. (default 5G)
+      --swift-domain string                User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+      --swift-endpoint-type string         Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+      --swift-env-auth                     Get swift credentials from environment variables in standard OpenStack form.
+      --swift-key string                   API key or password (OS_PASSWORD).
+      --swift-region string                Region name - optional (OS_REGION_NAME)
+      --swift-storage-policy string        The storage policy to use when creating a new container
+      --swift-storage-url string           Storage URL - optional (OS_STORAGE_URL)
+      --swift-tenant string                Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+      --swift-tenant-domain string         Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+      --swift-tenant-id string             Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+      --swift-user string                  User name to log in (OS_USERNAME).
+      --swift-user-id string               User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+      --syslog                             Use Syslog for logging
+      --syslog-facility string             Facility for syslog, eg KERN,USER,... (default "DAEMON")
+      --timeout duration                   IO idle timeout (default 5m0s)
+      --tpslimit float                     Limit HTTP transactions per second to this.
+      --tpslimit-burst int                 Max burst of transactions for --tpslimit. (default 1)
+      --track-renames                      When synchronizing, track file renames and do a server side move if possible
+      --transfers int                      Number of file transfers to run in parallel. (default 4)
+  -u, --update                             Skip files that are newer on the destination.
+      --use-server-modtime                 Use server modified time instead of object metadata
+      --user-agent string                  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
+  -v, --verbose count                      Print lots more stuff (repeat for more)
+      --webdav-bearer-token string         Bearer token instead of user/pass (eg a Macaroon)
+      --webdav-pass string                 Password.
+      --webdav-url string                  URL of http host to connect to
+      --webdav-user string                 User name
+      --webdav-vendor string               Name of the Webdav site/service/software you are using
+      --yandex-client-id string            Yandex Client Id
+      --yandex-client-secret string        Yandex Client Secret
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
-###### Auto generated by spf13/cobra on 16-Jun-2018
+###### Auto generated by spf13/cobra on 1-Sep-2018
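In rclone's documented workflow, `rclone authorize` is run on a machine that has a web browser in order to fetch an OAuth token for a remote being configured on a headless machine. A minimal sketch, where the remote type `"drive"` is only an example:

```
rclone authorize "drive"
```

The token block this prints is then pasted into the matching prompt of the `rclone config` session on the machine without a browser.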
@@ -1,5 +1,5 @@
---
-date: 2018-06-16T18:20:28+01:00
+date: 2018-09-01T12:54:54+01:00
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/
@@ -27,152 +27,261 @@ rclone cachestats source: [flags]
### Options inherited from parent commands
```
-      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-upload-wait-per-gb duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm                      If enabled, do not request console confirmation.
-      --azureblob-chunk-size int          Upload chunk size. Must fit in memory. (default 4M)
-      --azureblob-upload-cutoff int       Cutoff for switching to chunked upload (default 256M)
-      --b2-chunk-size int                 Upload chunk size. Must fit in memory. (default 96M)
-      --b2-hard-delete                    Permanently delete files on remote removal, otherwise hide files.
-      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
-      --b2-upload-cutoff int              Cutoff for switching to chunked upload (default 190.735M)
-      --b2-versions                       Include old versions in directory listings.
-      --backup-dir string                 Make backups into hierarchy based in DIR.
-      --bind string                       Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-upload-cutoff int             Cutoff for switching to multipart upload (default 50M)
-      --buffer-size int                   Buffer size when copying files. (default 16M)
-      --bwlimit BwTimetable               Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
-      --cache-chunk-no-memory             Disable the in-memory cache for storing chunks during streaming
-      --cache-chunk-path string           Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
-      --cache-chunk-size string           The size of a chunk (default "5M")
-      --cache-db-path string              Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
-      --cache-db-purge                    Purge the cache DB before
-      --cache-db-wait-time duration       How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string                  Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
-      --cache-info-age string             How much time should object info be stored in cache (default "6h")
-      --cache-read-retries int            How many times to retry a read from a cache storage (default 10)
-      --cache-rps int                     Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
-      --cache-tmp-upload-path string      Directory to keep temporary files until they are uploaded to the cloud storage
-      --cache-tmp-wait-time string        How long should files be stored in local cache before being uploaded (default "15m")
+      --acd-auth-url string                Auth server URL.
+      --acd-client-id string               Amazon Application Client ID.
+      --acd-client-secret string           Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string               Token server url.
+      --acd-upload-wait-per-gb Duration    Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string                Remote or path to alias.
+      --ask-password                       Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm                       If enabled, do not request console confirmation.
+      --azureblob-access-tier string       Access tier of blob, supports hot, cool and archive tiers.
+      --azureblob-account string           Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix    Upload chunk size. Must fit in memory. (default 4M)
+      --azureblob-endpoint string          Endpoint for the service
+      --azureblob-key string               Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-sas-url string           SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
+      --b2-account string                  Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix           Upload chunk size. Must fit in memory. (default 96M)
+      --b2-endpoint string                 Endpoint for the service.
+      --b2-hard-delete                     Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string                      Application Key
+      --b2-test-mode string                A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload. (default 190.735M)
+      --b2-versions                        Include old versions in directory listings.
+      --backup-dir string                  Make backups into hierarchy based in DIR.
+      --bind string                        Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string               Box App Client Id.
+      --box-client-secret string           Box App Client Secret
+      --box-commit-retries int             Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Endpoint URL for connecting to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
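Taken together, the flags above compose into single command lines. The sketch below is purely illustrative (the remote name `remote:backup`, the source path `/data`, and the chosen values are hypothetical, not taken from this manual); it only assembles and prints the invocation rather than running it:

```shell
# Illustrative only: build (but do not execute) a sync command from
# flags documented above. "remote:backup" and /data are placeholders.
cmd="rclone sync /data remote:backup \
  --transfers 8 \
  --bwlimit '08:00,512k 19:00,off' \
  --backup-dir remote:backup-old \
  --log-level INFO"
printf '%s\n' "$cmd"
```

Here `--bwlimit` uses the full-timetable form mentioned in its description, throttling to 512 kBytes/s from 08:00 and lifting the limit at 19:00, while `--backup-dir` diverts files that would otherwise be overwritten or deleted into a separate hierarchy.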
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018
@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
@@ -49,152 +49,261 @@ rclone cat remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connnection retries. (default 3)
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
- * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
+ * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
- ###### Auto generated by spf13/cobra on 16-Jun-2018
+ ###### Auto generated by spf13/cobra on 1-Sep-2018
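As a quick orientation for readers of this diff (an editorial sketch, not part of the generated page): the options above are rclone's global flags, and they combine freely with any subcommand. An invocation exercising several of them, including `-P`/`--progress` which is new in v1.43, might look like the following; the remote name `remote:` is an assumed, already-configured remote:

```
# Trial-run a sync with progress display, bandwidth capping and
# backups of replaced files. All flags appear in the list above;
# --dry-run makes this safe to experiment with.
rclone sync /home/user/docs remote:backup \
    --dry-run -P \
    --transfers 8 --checkers 16 \
    --bwlimit 1M \
    --backup-dir remote:backup-old --suffix .bak \
    --log-level INFO --stats 30s
```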


@@ -1,5 +1,5 @@
---
- date: 2018-06-16T18:20:28+01:00
+ date: 2018-09-01T12:54:54+01:00
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@@ -43,152 +43,261 @@ rclone check source:path dest:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
- --cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-chunk-size string The size of a chunk (default "5M")
- --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
- --cache-db-purge Purge the cache DB before
- --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
- --cache-info-age string How much time should object info be stored in cache (default "6h")
- --cache-read-retries int How many times to retry a read from a cache storage (default 10)
- --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
- --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
- --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
- --cache-workers int How many workers should run in parallel to download chunks (default 4)
- --cache-writes Will cache file data on writes through the FS
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-alternate-export Use alternate export URLs for google documents export.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
+ --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+ --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
+ --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
+ --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
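As a quick illustration of how the global flags above combine on the command line, here is a hypothetical invocation (the local path and the `remote:backup` remote are placeholders, not taken from this page; every flag used appears in the options list above):

```shell
# Trial-run a sync with tuned parallelism and a bandwidth cap.
# --dry-run makes rclone only report what it would copy or delete.
rclone sync /home/user/docs remote:backup \
    --dry-run \
    --transfers 8 \
    --checkers 16 \
    --bwlimit 1M \
    --log-level INFO
```

Dropping `--dry-run` performs the sync for real; `--bwlimit 1M` caps throughput at 1 MByte/s as described in the flag's help text.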
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018
@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@@ -28,152 +28,261 @@ rclone cleanup remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
      --dump-headers                            Dump HTTP headers - may contain sensitive info       --crypt-password2 string                   Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
      --exclude-if-present string               Exclude directories if filename is present       --delete-after                             When synchronizing, delete files on destination after transferring (default)
      --fast-list                               Use recursive list if available. Uses more memory but fewer transactions.       --delete-before                            When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
      --gcs-location string                     Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).       --drive-acknowledge-abuse                  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
      --ignore-errors                           Delete even if there are I/O errors       --drive-chunk-size SizeSuffix              Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
      --ignore-size                             Ignore size when skipping; use mod-time or checksum.       --drive-client-secret string               Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --modify-window duration                  Max time diff to be considered the same (default 1ns)       --dump-headers                             Dump HTTP headers - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
      --s3-storage-class string                 Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)       --ignore-errors                            Delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
      --sftp-ask-password                       Allow asking for SFTP password when needed.       --ignore-size                              Ignore size when skipping; use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int          Number of connection retries. (default 3)
      --qingstor-endpoint string                 Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                        Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
      --s3-sse-kms-key-id string                 If using a KMS key ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
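The backend flags above mirror config-file keys: drop the backend prefix and turn dashes into underscores, so for example `--swift-auth` corresponds to the `auth` key inside a remote of `type = swift`. As a rough sketch of that mapping, here is a hypothetical `rclone.conf` entry using a few of the swift options listed above; the remote name and all values are invented placeholders, not working credentials:

```
# Hypothetical remote; placeholder values only.
[myswift]
type = swift
user = demo-user
key = demo-password
auth = https://auth.example.com/v3
domain = Default
region = example-region
```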
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
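The `SizeSuffix` values that appear throughout these flags (`5M`, `190.735M`, `9G`, …) use binary multiples: `k` is 1024 bytes, `M` is 1024², `G` is 1024³. A minimal sketch of that notation in Python, where `parse_size` is an illustrative helper and not part of rclone itself:

```python
# Illustrative parser for rclone-style SizeSuffix strings.
# Assumes binary multiples: k = 1024, M = 1024**2, G = 1024**3.
UNITS = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}

def parse_size(s: str) -> int:
    """Convert a SizeSuffix string such as '5M' or '190.735M' to bytes."""
    if s and s[-1] in UNITS:
        return int(float(s[:-1]) * UNITS[s[-1]])
    return int(s)  # a bare number is taken as bytes

print(parse_size("5M"))        # default --s3-chunk-size
print(parse_size("190.735M"))  # default --b2-upload-cutoff, roughly 200 MB decimal
```

This is why a value like `190.735M` shows up as a default: it is the binary-suffix spelling of an underlying byte limit that is round in decimal terms.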
@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone config" title: "rclone config"
slug: rclone_config slug: rclone_config
url: /commands/rclone_config/ url: /commands/rclone_config/
@ -28,153 +28,262 @@ rclone config [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
      --delete-after                            When synchronizing, delete files on destination after transferring       --cache-db-wait-time Duration              How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --delete-before                           When synchronizing, delete files on destination before transferring       --cache-dir string                         Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
      --drive-chunk-size int                    Upload chunk size. Must be a power of 2 >= 256k. (default 8M)       --cache-rps int                            Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
      --dump-headers                            Dump HTTP headers - may contain sensitive info       --crypt-password2 string                   Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
      --exclude-if-present string               Exclude directories if filename is present       --delete-after                             When synchronizing, delete files on destination after transferring (default)
      --fast-list                               Use recursive list if available. Uses more memory but fewer transactions.       --delete-before                            When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors Delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping; use mod-time or checksum. --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip. --dump-headers Dump HTTP headers - may contain sensitive info
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
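As a sketch of how the global flags above combine in practice, the snippet below composes a sync invocation from flags documented in this listing. The source path and the `remote:backup` destination are placeholders, not remotes configured anywhere in this manual.

```
#!/bin/sh
# Hedged sketch only: "/home/user/docs" and "remote:backup" are
# placeholder paths, not remotes from this document.
set -eu

SRC="/home/user/docs"
DST="remote:backup"

# --dry-run previews the sync without changing anything; --transfers and
# --bwlimit (documented above) tune parallelism and bandwidth; --exclude
# filters files by pattern; --log-level controls verbosity.
CMD="rclone sync $SRC $DST --dry-run --transfers 8 --bwlimit 1M --exclude '*.tmp' --log-level INFO"

# Print rather than execute, so the sketch is safe to run anywhere.
printf '%s\n' "$CMD"
```

Dropping `--dry-run` would perform the transfer for real; every flag not given keeps the default shown in the listing above.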
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
* [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options. * [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options.
* [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote <name>. * [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote <name>.
* [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON. * [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON.
@ -185,4 +294,4 @@ rclone config [flags]
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote. * [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote. * [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone config create" title: "rclone config create"
slug: rclone_config_create slug: rclone_config_create
url: /commands/rclone_config_create/ url: /commands/rclone_config_create/
@ -33,152 +33,261 @@ rclone config create <name> <type> [<key> <value>]* [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transferring --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transferring --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP headers - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors Delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping; use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 1-Sep-2018
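Many of the flags above (e.g. `--drive-chunk-size`, `--b2-upload-cutoff`) take a SizeSuffix value such as `8M` or `5G`, using binary (1024-based) units. The sketch below illustrates how such a suffix can be parsed into bytes; it is a hypothetical helper for explanation only, not rclone's actual Go implementation.

```python
# Illustrative sketch only: parse a SizeSuffix value like "8M" or "5G"
# into a byte count, assuming binary (1024-based) units as rclone uses.
def parse_size_suffix(value: str) -> int:
    multipliers = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    suffix = value[-1]
    if suffix in multipliers:
        # Fractional values like "190.735M" are allowed, then truncated.
        return int(float(value[:-1]) * multipliers[suffix])
    return int(value)  # no suffix: plain byte count

# e.g. the --drive-chunk-size default of 8M:
print(parse_size_suffix("8M"))  # 8388608
```

For example, the `--azureblob-upload-cutoff` default of `256M` would come out as 256 * 1024 * 1024 bytes under this scheme.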
View File
@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/
@ -25,152 +25,261 @@ rclone config delete <name> [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP headers - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors Delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping; use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
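Several of the flags above take SizeSuffix values (suffixes b|k|M|G), which rclone expands using 1024-based units. As a rough illustration of how such values map to byte counts, here is a small Python sketch — rclone itself is written in Go, so this is not its actual parser, just an assumed equivalent of the suffix arithmetic:

```python
def parse_size_suffix(s: str) -> int:
    """Expand a SizeSuffix value like '5M' or '190.735M' into bytes.

    Assumes 1024-based units (b|k|M|G), as the flag help above suggests.
    Illustrative only; not rclone's actual implementation.
    """
    multipliers = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    s = s.strip()
    if s and s[-1] in multipliers:
        num, unit = s[:-1], s[-1]
    else:
        num, unit = s, "b"  # a bare number is taken as bytes
    return int(float(num) * multipliers[unit])

print(parse_size_suffix("5M"))  # --s3-chunk-size default: 5242880 bytes
print(parse_size_suffix("190.735M"))  # --b2-upload-cutoff default
```

Under this reading, the `--b2-upload-cutoff` default of 190.735M works out to roughly 200 MB in decimal units.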
### SEE ALSO ### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone config dump" title: "rclone config dump"
slug: rclone_config_dump slug: rclone_config_dump
url: /commands/rclone_config_dump/ url: /commands/rclone_config_dump/
@ -25,152 +25,261 @@ rclone config dump [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transferring --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transferring --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP headers - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors Delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping; use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP headers - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int Number of connection retries. (default 3)
      --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 1-Sep-2018
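
The global flags above compose freely on a single command line. As an illustrative sketch only (the remote name `remote:backup` and the local path are placeholder assumptions, not part of the generated reference), a cautious sync might combine a few of them like this:

```shell
# Hedged sketch: trial-run a sync using global flags from the reference above.
# "remote:backup" and "~/documents" are placeholder assumptions.
rclone sync ~/documents remote:backup \
    --dry-run \
    --exclude "*.tmp" \
    --max-age 30d \
    --transfers 4 \
    --stats 30s \
    --log-level INFO
```

Because `--dry-run` is present, the command only reports what it would transfer or delete; drop that flag to perform the sync for real.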

---
date: 2018-09-01T12:54:54+01:00
title: "rclone config edit"
slug: rclone_config_edit
url: /commands/rclone_config_edit/
---
### Options inherited from parent commands
```
      --acd-auth-url string Auth server URL.
      --acd-client-id string Amazon Application Client ID.
      --acd-client-secret string Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string Token server url.
      --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string Remote or path to alias.
      --ask-password Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm If enabled, do not request console confirmation.
      --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
      --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
      --azureblob-endpoint string Endpoint for the service
      --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-sas-url string SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
      --b2-account string Account ID or Application Key ID
      --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string Endpoint for the service.
      --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
      --b2-key string Application Key
      --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
      --b2-versions Include old versions in directory listings.
      --backup-dir string Make backups into hierarchy based in DIR.
      --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string Box App Client Id.
      --box-client-secret string Box App Client Secret
      --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
      --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
      --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
      --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
      --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
      --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge Purge the cache DB before
      --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
      --cache-plex-password string The password of the Plex user
      --cache-plex-url string The URL of the Plex server
      --cache-plex-username string The username of the Plex user
      --cache-read-retries int How many times to retry a read from a cache storage (default 10)
      --cache-remote string Remote to cache.
      --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
      --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
      --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int How many workers should run in parallel to download chunks (default 4)
      --cache-writes Will cache file data on writes through the FS
      --checkers int Number of checkers to run in parallel. (default 8)
  -c, --checksum Skip based on checksum & size, not mod-time & size
      --config string Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration Connect timeout (default 1m0s)
  -L, --copy-links Follow symlinks and copy the pointed to item.
      --cpuprofile string Write cpu profile to file
      --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
      --crypt-password string Password or pass phrase for encryption.
      --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string Remote to encrypt/decrypt.
      --crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after When synchronizing, delete files on destination after transferring (default)
      --delete-before When synchronizing, delete files on destination before transferring
      --delete-during When synchronizing, delete files during transfer
      --delete-excluded Delete files on dest excluded from sync
      --disable string Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-alternate-export Use alternate export URLs for google documents export.
      --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string Google Application Client Id
      --drive-client-secret string Google Application Client Secret
      --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-impersonate string Impersonate this user when using a service account.
      --drive-keep-revision-forever Keep new head revision forever.
      --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string ID of the root folder
      --drive-scope string Scope that rclone should use when requesting access from drive.
      --drive-service-account-file string Service Account Credentials JSON file path
      --drive-shared-with-me Only show files that are shared with me
      --drive-skip-gdocs Skip google documents in all listings.
      --drive-trashed-only Only show files that are in the trash
      --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date Use created date instead of modified date.
      --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
      --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
      --dropbox-client-id string Dropbox App Client Id
      --dropbox-client-secret string Dropbox App Client Secret
  -n, --dry-run Do a trial run with no permanent changes
      --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers Dump HTTP headers - may contain sensitive info
      --exclude stringArray Exclude files matching pattern
      --exclude-from stringArray Read exclude patterns from file
      --exclude-if-present string Exclude directories if filename is present
      --fast-list Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray Read list of source-file names from file
  -f, --filter stringArray Add a file-filtering rule
      --filter-from stringArray Read filtering patterns from a file
      --ftp-host string FTP host to connect to
      --ftp-pass string FTP password
      --ftp-port string FTP port, leave blank to use default (21)
      --ftp-user string FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string Access Control List for new buckets.
      --gcs-client-id string Google Application Client Id
      --gcs-client-secret string Google Application Client Secret
      --gcs-location string Location for the newly created buckets.
      --gcs-object-acl string Access Control List for new objects.
      --gcs-project-number string Project number.
      --gcs-service-account-file string Service Account Credentials JSON file path
      --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
      --http-url string URL of http host to connect to
      --hubic-client-id string Hubic Client Id
      --hubic-client-secret string Hubic Client Secret
      --ignore-checksum Skip post copy check of checksums.
      --ignore-errors delete even if there are I/O errors
      --ignore-existing Skip all files that exist on destination
      --ignore-size Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times Don't skip files that match size and time - transfer all files
      --immutable Do not modify files. Fail if existing files have been modified.
      --include stringArray Include files matching pattern
      --include-from stringArray Read include patterns from file
      --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string The mountpoint to use.
      --jottacloud-pass string Password.
      --jottacloud-user string User Name
      --local-no-check-updated Don't check to see if the files change during upload
      --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
      --local-nounc string Disable UNC (long path names) conversion on Windows
      --log-file string Log everything to this file
      --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 1-Sep-2018
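As a worked example of one of the flags above: `--rc-htpasswd` expects a standard htpasswd file for authenticating remote control clients. A minimal sketch of creating one with `openssl` (assumed to be installed); the username `admin`, the password, and the file name are hypothetical, not from this commit:

```shell
# Generate an htpasswd entry using the Apache MD5 (apr1) scheme,
# one of the hash schemes the htpasswd format supports.
printf 'admin:%s\n' "$(openssl passwd -apr1 'secret')" > rc-users.htpasswd

# The file now holds a single 'user:hash' line.
cat rc-users.htpasswd

# Hypothetical usage - run the remote control server with this file:
# rclone --rc --rc-htpasswd rc-users.htpasswd lsd remote:
```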


@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone config file"
slug: rclone_config_file
url: /commands/rclone_config_file/
@ -25,152 +25,261 @@ rclone config file [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
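As an illustrative sketch (not from this document) of how several of these global flags combine on one command line, the following copies recently changed files with a bandwidth timetable and compact stats. The remote name `remote:backup` and the source path are hypothetical placeholders; adjust them to a remote you have configured.

```sh
# Sketch only - "remote:backup" and /data are hypothetical names.
# Throttle to 512 kBytes/s from 08:00 and run unlimited from 00:00
# (--bwlimit timetable syntax), print one-line stats every 30s,
# and transfer only files modified within the last week.
rclone copy /data remote:backup \
    --bwlimit "08:00,512 00:00,off" \
    --stats 30s --stats-one-line \
    --max-age 7d -v
```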
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 1-Sep-2018


---
date: 2018-09-01T12:54:54+01:00
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/
rclone config password <name> [<key> <value>]+ [flags]
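A brief usage sketch for the form shown above: `rclone config password` updates a password-type key on an existing remote without the interactive prompt, obscuring the value in the config file. The remote name `mysftp` and the password value are hypothetical placeholders.

```sh
# Hypothetical remote "mysftp" - set its "pass" key non-interactively.
rclone config password mysftp pass SuperSecretPassword
```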
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int       Number of connection retries. (default 3)
      --qingstor-endpoint string              Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                     Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
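The global flags above are combined on a single command line. As an illustrative sketch only (the remote name `remote:backup/docs` and the local path are placeholders, not taken from this page), a trial sync tuned with a few of them might look like:

```shell
#!/bin/sh
# Hypothetical example: a dry-run sync using several of the global flags above.
# "remote:backup/docs" is a placeholder remote path for illustration.
if command -v rclone >/dev/null 2>&1; then
  # --dry-run makes this a trial run with no permanent changes
  rclone sync ./docs remote:backup/docs \
    --transfers 8 \
    --tpslimit 10 --tpslimit-burst 1 \
    --max-age 7d \
    --track-renames \
    --dry-run -v \
    || echo "dry-run failed (expected without a configured remote)" >&2
else
  echo "rclone not installed; command shown for illustration only" >&2
fi
```

The `--dry-run` flag is the usual way to preview what a flag combination will do before committing to it.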
### SEE ALSO

* [rclone config](/commands/rclone_config/)	 - Enter an interactive configuration session.

###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone config providers"
slug: rclone_config_providers
url: /commands/rclone_config_providers/
@@ -25,152 +25,261 @@ rclone config providers [flags]
### Options inherited from parent commands

```
      --acd-auth-url string                   Auth server URL.
      --acd-client-id string                  Amazon Application Client ID.
      --acd-client-secret string              Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix     Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                  Token server url.
      --acd-upload-wait-per-gb Duration       Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                   Remote or path to alias.
      --ask-password                          Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                          If enabled, do not request console confirmation.
      --azureblob-access-tier string          Access tier of blob, supports hot, cool and archive tiers.
      --azureblob-account string              Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix       Upload chunk size. Must fit in memory. (default 4M)
      --azureblob-endpoint string             Endpoint for the service
      --azureblob-key string                  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-sas-url string              SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix    Cutoff for switching to chunked upload. (default 256M)
      --b2-account string                     Account ID or Application Key ID
      --b2-chunk-size SizeSuffix              Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                    Endpoint for the service.
      --b2-hard-delete                        Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                         Application Key
      --b2-test-mode string                   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload. (default 190.735M)
      --b2-versions                           Include old versions in directory listings.
      --backup-dir string                     Make backups into hierarchy based in DIR.
      --bind string                           Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                  Box App Client Id.
      --box-client-secret string              Box App Client Secret
      --box-commit-retries int                Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix          Cutoff for switching to multipart upload. (default 50M)
      --buffer-size int                       In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   Interval at which chunk cleanup runs (default 1m0s)
      --cache-chunk-no-memory                 Disable the in-memory cache for storing chunks during streaming
      --cache-chunk-path string               Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix           The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
      --cache-chunk-total-size SizeSuffix     The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
      --cache-db-path string                  Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                        Purge the cache DB before
      --cache-db-wait-time Duration           How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                      Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration               How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
      --cache-plex-password string            The password of the Plex user
      --cache-plex-url string                 The URL of the Plex server
      --cache-plex-username string            The username of the Plex user
      --cache-read-retries int                How many times to retry a read from a cache storage (default 10)
      --cache-remote string                   Remote to cache.
      --cache-rps int                         Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
      --cache-tmp-upload-path string          Directory to keep temporary files until they are uploaded to the cloud storage
      --cache-tmp-wait-time Duration          How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                     How many workers should run in parallel to download chunks (default 4)
      --cache-writes                          Will cache file data on writes through the FS
      --checkers int                          Number of checkers to run in parallel. (default 8)
  -c, --checksum                              Skip based on checksum & size, not mod-time & size
      --config string                         Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                   Connect timeout (default 1m0s)
  -L, --copy-links                            Follow symlinks and copy the pointed to item.
      --cpuprofile string                     Write cpu profile to file
      --crypt-directory-name-encryption       Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string      How to encrypt the filenames. (default "standard")
      --crypt-password string                 Password or pass phrase for encryption.
      --crypt-password2 string                Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                   Remote to encrypt/decrypt.
      --crypt-show-mapping                    For all files listed show how the names encrypt.
      --delete-after                          When synchronizing, delete files on destination after transferring (default)
      --delete-before                         When synchronizing, delete files on destination before transferring
      --delete-during                         When synchronizing, delete files during transfer
      --delete-excluded                       Delete files on dest excluded from sync
      --disable string                        Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse               Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-alternate-export                Use alternate export URLs for google documents export.
      --drive-auth-owner-only                 Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix           Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                Google Application Client Id
      --drive-client-secret string            Google Application Client Secret
      --drive-formats string                  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-impersonate string              Impersonate this user when using a service account.
      --drive-keep-revision-forever           Keep new head revision forever.
      --drive-list-chunk int                  Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string           ID of the root folder
      --drive-scope string                    Scope that rclone should use when requesting access from drive.
      --drive-service-account-file string     Service Account Credentials JSON file path
      --drive-shared-with-me                  Only show files that are shared with me
      --drive-skip-gdocs                      Skip google documents in all listings.
      --drive-trashed-only                    Only show files that are in the trash
      --drive-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                Use created date instead of modified date.
      --drive-use-trash                       Send files to the trash instead of deleting permanently. (default true)
      --dropbox-chunk-size SizeSuffix         Upload chunk size. Max 150M. (default 48M)
      --dropbox-client-id string              Dropbox App Client Id
      --dropbox-client-secret string          Dropbox App Client Secret
  -n, --dry-run                               Do a trial run with no permanent changes
      --dump string                           List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                           Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                          Dump HTTP headers - may contain sensitive info
      --exclude stringArray                   Exclude files matching pattern
      --exclude-from stringArray              Read exclude patterns from file
      --exclude-if-present string             Exclude directories if filename is present
      --fast-list                             Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                Read list of source-file names from file
  -f, --filter stringArray                    Add a file-filtering rule
      --filter-from stringArray               Read filtering patterns from a file
      --ftp-host string                       FTP host to connect to
      --ftp-pass string                       FTP password
      --ftp-port string                       FTP port, leave blank to use default (21)
      --ftp-user string                       FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                 Access Control List for new buckets.
      --gcs-client-id string                  Google Application Client Id
      --gcs-client-secret string              Google Application Client Secret
      --gcs-location string                   Location for the newly created buckets.
      --gcs-object-acl string                 Access Control List for new objects.
      --gcs-project-number string             Project number.
      --gcs-service-account-file string       Service Account Credentials JSON file path
      --gcs-storage-class string              The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                       URL of http host to connect to
      --hubic-client-id string                Hubic Client Id
      --hubic-client-secret string            Hubic Client Secret
      --ignore-checksum                       Skip post copy check of checksums.
      --ignore-errors                         delete even if there are I/O errors
      --ignore-existing                       Skip all files that exist on destination
      --ignore-size                           Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                          Don't skip files that match size and time - transfer all files
      --immutable                             Do not modify files. Fail if existing files have been modified.
      --include stringArray                   Include files matching pattern
      --include-from stringArray              Read include patterns from file
      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string          The mountpoint to use.
      --jottacloud-pass string                Password.
      --jottacloud-user string                User Name
      --local-no-check-updated                Don't check to see if the files change during upload
      --local-no-unicode-normalization        Don't apply unicode normalization to paths and filenames
      --local-nounc string                    Disable UNC (long path names) conversion on Windows
      --log-file string                       Log everything to this file
      --log-level string                      Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                 Number of low level retries to do. (default 10)
      --max-age duration                      Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                       Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                        When synchronizing, limit the number of deletes (default -1)
      --max-depth int                         If set limits the recursion depth to this. (default -1)
      --max-size int                          Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int                      Maximum size of data to transfer. (default off)
      --mega-debug                            Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int       Number of connection retries. (default 3)
      --qingstor-endpoint string              Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                     Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 1-Sep-2018
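As a usage sketch tying the flag reference together (not taken from the rclone docs themselves): size suffixes accepted by flags such as `--max-size`, `--min-size` and `--bwlimit` are powers of 1024, and the sync invocation shown in the comment is hypothetical — `remote:backup` and `/data` are placeholders for a configured remote and a local path.

```shell
# Size suffixes used by --max-size, --min-size, --bwlimit etc. are
# powers of 1024, so "10M" means 10 * 1024 * 1024 bytes.
k=1024
M=$((1024 * k))
G=$((1024 * M))
echo $((10 * M))

# A hypothetical sync combining several of the global flags listed above
# ("remote:backup" is a placeholder for a configured remote):
# rclone sync /data remote:backup --transfers 8 --bwlimit 10M --stats 30s --dry-run
```

The commented `rclone sync` line uses only flags documented in this reference; `--dry-run` makes it a trial run with no permanent changes.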


@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone config show"
slug: rclone_config_show
url: /commands/rclone_config_show/
@@ -25,152 +25,261 @@ rclone config show [<remote>] [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
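Any of the flags above can also be supplied through the environment: rclone documents an `RCLONE_*` naming convention in which the flag name is upper-cased and `-` is replaced with `_`. A minimal shell sketch of that mapping (the convention is documented; the helper function name here is our own):

```shell
# Derive the RCLONE_* environment variable for a given long flag name,
# per rclone's documented convention: uppercase, '-' -> '_', RCLONE_ prefix.
flag_to_env() {
    printf 'RCLONE_%s\n' "$(printf '%s' "$1" | tr 'a-z-' 'A-Z_')"
}

flag_to_env "s3-access-key-id"   # RCLONE_S3_ACCESS_KEY_ID
flag_to_env "transfers"          # RCLONE_TRANSFERS
```

So `export RCLONE_TRANSFERS=8` has the same effect as passing `--transfers 8` on every invocation.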
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone config update"
slug: rclone_config_update
url: /commands/rclone_config_update/
@@ -32,152 +32,261 @@ rclone config update <name> [<key> <value>]+ [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
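As a sketch of how the global flags above combine in practice (the source path and the remote name `remote:backup` are placeholders, not taken from this page), a typical invocation might look like the command printed here:

```shell
# Hypothetical rclone sync invocation composed from the global flags above.
# --transfers controls parallel file transfers, --bwlimit caps bandwidth,
# and --dry-run previews the operation without changing anything.
cmd='rclone sync /home/user/docs remote:backup --transfers 8 --bwlimit 10M --dry-run'
printf '%s\n' "$cmd"
```

Dropping `--dry-run` would perform the sync for real; the other flags only shape how it runs.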
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 1-Sep-2018

---
date: 2018-09-01T12:54:54+01:00
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
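To show how the filtering and safety flags above fit together in a copy run (the local path and the remote name `remote:photos` are placeholders, not from this page), the composed command might look like this:

```shell
# Hypothetical rclone copy invocation using the filter flags above.
# --include restricts the copy to matching files, --max-age skips files
# older than a week, and --dry-run previews without transferring.
cmd='rclone copy /home/user/pictures remote:photos --include "*.jpg" --max-age 7d --dry-run'
printf '%s\n' "$cmd"
```

Quoting the `--include` pattern keeps the shell from expanding `*.jpg` before rclone sees it.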
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018

---
date: 2018-09-01T12:54:54+01:00
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering (default)
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connnection retries. (default 3)
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018

@@ -0,0 +1,288 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone copyurl"
slug: rclone_copyurl
url: /commands/rclone_copyurl/
---
## rclone copyurl
Copy url content to dest.
### Synopsis
Download a URL's content and copy it to the destination
without saving it in temporary storage.
```
rclone copyurl https://example.com dest:path [flags]
```
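For instance, a sketch of typical use (the remote name `remote:` and the URL are illustrative placeholders, assuming such a remote is already configured):

```
# Fetch a file over HTTPS and stream it straight to the remote,
# naming the destination file explicitly:
rclone copyurl https://example.com/files/backup.tar.gz remote:archive/backup.tar.gz
```

As the usage line shows, `dest:path` names the destination file itself, since the downloaded content is streamed directly to that object rather than staged in a temporary file first.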
### Options
```
-h, --help help for copyurl
```
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after                             When synchronizing, delete files on destination after transferring (default)
      --delete-before                            When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix              Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                             Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
      --ignore-errors                            Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
      --ignore-size                              Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
      --max-depth int                            If set, limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int          Number of connection retries. (default 3)
      --qingstor-endpoint string                 Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                        Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style                      If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
      --s3-sse-kms-key-id string                 If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
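Backend flags in the list above mirror keys in the rclone config file: strip the backend prefix and turn dashes into underscores, so `--b2-hard-delete` corresponds to `hard_delete` in the section of a remote whose `type` is `b2`. A minimal sketch of an equivalent config fragment — the remote name `myb2` and the credential values are placeholders, not real defaults:

```ini
; ~/.rclone.conf (fragment)
; Remote name "myb2" and the values below are hypothetical.
[myb2]
type = b2
account = XXXXXXXXXXXX
key = XXXXXXXXXXXX
hard_delete = true
```

The same option can also be set per remote through the environment, e.g. `RCLONE_CONFIG_MYB2_HARD_DELETE=true`, which is useful in scripts where editing the config file is awkward.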
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone cryptcheck" title: "rclone cryptcheck"
slug: rclone_cryptcheck slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/ url: /commands/rclone_cryptcheck/
@ -53,152 +53,261 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
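The global flags above are easiest to understand in combination. As a hand-written illustration (not part of the auto-generated output), here is one way a few of them might be used together; `myremote:` is a placeholder remote name, not one defined in this manual:

```
# "myremote:" is a hypothetical remote previously set up with "rclone config".
#   --transfers 8   run 8 file transfers in parallel (default 4)
#   --bwlimit 1M    limit bandwidth to 1 MByte/s
#   --dry-run       trial run with no permanent changes
rclone sync /home/user/photos myremote:photos --transfers 8 --bwlimit 1M --dry-run
```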
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone cryptdecode" title: "rclone cryptdecode"
slug: rclone_cryptdecode slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/ url: /commands/rclone_cryptdecode/
@ -37,152 +37,261 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transferring --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transferring --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --cache-remote string Remote to cache.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP headers - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP headers - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
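A short sketch may help show how the global options listed above combine on a real command line. The remote name `remote:` and the paths below are placeholders, not defined in this document; a remote would first be set up with `rclone config`.

```shell
# Sync a local directory to a configured remote, combining several
# of the global flags documented above. "remote:" is a placeholder
# remote name; all flags used here appear in the options list.
rclone sync /home/user/photos remote:photos \
    --transfers 8 \
    --checkers 16 \
    --bwlimit 1M \
    --stats 30s \
    --log-level INFO
```

The same global flags apply to the other subcommands listed under SEE ALSO.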
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
@@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone dbhashsum" title: "rclone dbhashsum"
slug: rclone_dbhashsum slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/ url: /commands/rclone_dbhashsum/
@@ -30,152 +30,261 @@
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transferring --cache-db-purge Purge the cache DB before
--delete-before When synchronizing, delete files on destination before transferring --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --cache-remote string Remote to cache.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP headers - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors Delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP headers - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
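As an illustration only (not part of the generated reference), a few of the global flags above can be combined in a single invocation. The remote name `remote:` and the local paths below are placeholders for whatever you have configured with `rclone config`:

```shell
# Copy with a bandwidth cap, extra parallel transfers and a progress display.
rclone copy /home/user/docs remote:backup/docs --bwlimit 1M --transfers 8 --progress

# --bwlimit also accepts a full timetable; "off" removes the cap for that period.
rclone sync /home/user/docs remote:backup/docs \
    --bwlimit "08:00,512 18:00,10M 23:00,off" \
    --track-renames \
    --fast-list \
    --log-level INFO
```

Note that `--fast-list` trades memory for fewer listing transactions, so it mainly helps on bucket-based remotes.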
### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43

###### Auto generated by spf13/cobra on 1-Sep-2018
---
date: 2018-09-01T12:54:54+01:00
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
---

## rclone dedupe

```
rclone dedupe [mode] remote:path [flags]
```

### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
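As a quick, hedged illustration of a few of the global flags listed above: the remote name `remote:` and the local path below are placeholders, and `rclone` must already be installed and configured for the command to run.

```shell
# Illustrative only: "remote:" and /home/user/photos are placeholder names.
# -P/--progress (new in v1.43) shows live transfer stats; --max-backlog
# bounds the sync/check backlog; --bwlimit caps bandwidth at 1 MByte/s.
rclone sync /home/user/photos remote:photos \
    --progress \
    --max-backlog 5000 \
    --bwlimit 1M \
    --transfers 4
```

The same flags apply to every rclone subcommand, which is why they reappear under "Options inherited from parent commands" in each generated page.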
@@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone delete" title: "rclone delete"
slug: rclone_delete slug: rclone_delete
url: /commands/rclone_delete/ url: /commands/rclone_delete/
@@ -42,152 +42,261 @@
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transferring --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transferring --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors Delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors Delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
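
The long flag list above is easier to digest in context: global flags such as `--dry-run`, `--transfers` and `--bwlimit` can be combined with any subcommand. A minimal sketch, assuming a remote named `remote:` has been set up with `rclone config` (the remote name and `./photos` path are hypothetical placeholders; `--dry-run` keeps it a no-op preview):

```shell
#!/bin/sh
# Preview a bandwidth-limited sync using a few of the global flags above:
#   --dry-run      preview only, no changes are made
#   --transfers 8  run 8 file transfers in parallel
#   --bwlimit 1M   cap bandwidth at 1 MByte/s
# "remote:" is a placeholder for any remote configured via `rclone config`.
if command -v rclone >/dev/null 2>&1; then
    rclone sync ./photos remote:photos --dry-run --transfers 8 --bwlimit 1M --log-level INFO \
        || echo "sync preview failed (is 'remote:' configured?)"
else
    echo "rclone not installed; skipping demo"
fi
```

With `--dry-run` removed, the same command performs the actual transfer; `-P`/`--progress` (also listed above) shows live transfer stats.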

### SEE ALSO

* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.43

###### Auto generated by spf13/cobra on 1-Sep-2018

---
date: 2018-09-01T12:54:54+01:00
title: "rclone deletefile"
slug: rclone_deletefile
url: /commands/rclone_deletefile/
---
## rclone deletefile

Remove a single file from remote.

### Synopsis

Remove a single file from remote. Unlike `delete` it cannot be used to
remove a directory and it doesn't obey include/exclude filters - if the specified file exists,
it will always be removed.
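
A minimal sketch of the behaviour described above (the remote name and file path are hypothetical placeholders; `--dry-run` previews the deletion without removing anything):

```shell
#!/bin/sh
# Remove exactly one file on the remote. Unlike `rclone delete`, no
# directory is removed and include/exclude filters are not consulted.
# "remote:backup/old-report.pdf" is a placeholder, not a real remote.
if command -v rclone >/dev/null 2>&1; then
    rclone deletefile --dry-run remote:backup/old-report.pdf \
        || echo "deletefile preview failed (is 'remote:' configured?)"
else
    echo "rclone not installed; skipping demo"
fi
```

Dropping `--dry-run` performs the deletion; since the command ignores filter rules, double-check the path before running it for real.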
```
rclone deletefile remote:path [flags]
```
### Options inherited from parent commands

```
      --acd-auth-url string Auth server URL.
      --acd-client-id string Amazon Application Client ID.
      --acd-client-secret string Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string Token server url.
      --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string Remote or path to alias.
      --ask-password Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm If enabled, do not request console confirmation.
      --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
      --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
      --azureblob-endpoint string Endpoint for the service
      --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-sas-url string SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
      --b2-account string Account ID or Application Key ID
      --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string Endpoint for the service.
      --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
      --b2-key string Application Key
      --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
      --b2-versions Include old versions in directory listings.
      --backup-dir string Make backups into hierarchy based in DIR.
      --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string Box App Client Id.
      --box-client-secret string Box App Client Secret
      --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
      --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
      --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
      --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
      --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
      --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge Purge the cache DB before
      --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
      --cache-plex-password string The password of the Plex user
      --cache-plex-url string The URL of the Plex server
      --cache-plex-username string The username of the Plex user
      --cache-read-retries int How many times to retry a read from a cache storage (default 10)
      --cache-remote string Remote to cache.
      --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
      --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
      --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int How many workers should run in parallel to download chunks (default 4)
      --cache-writes Will cache file data on writes through the FS
      --checkers int Number of checkers to run in parallel. (default 8)
  -c, --checksum Skip based on checksum & size, not mod-time & size
      --config string Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration Connect timeout (default 1m0s)
  -L, --copy-links Follow symlinks and copy the pointed to item.
      --cpuprofile string Write cpu profile to file
      --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
      --crypt-password string Password or pass phrase for encryption.
      --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string Remote to encrypt/decrypt.
      --crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after When synchronizing, delete files on destination after transferring (default)
      --delete-before When synchronizing, delete files on destination before transferring
      --delete-during When synchronizing, delete files during transfer
      --delete-excluded Delete files on dest excluded from sync
      --disable string Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-alternate-export Use alternate export URLs for google documents export.
      --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string Google Application Client Id
      --drive-client-secret string Google Application Client Secret
      --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-impersonate string Impersonate this user when using a service account.
      --drive-keep-revision-forever Keep new head revision forever.
      --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string ID of the root folder
      --drive-scope string Scope that rclone should use when requesting access from drive.
      --drive-service-account-file string Service Account Credentials JSON file path
      --drive-shared-with-me Only show files that are shared with me
      --drive-skip-gdocs Skip google documents in all listings.
      --drive-trashed-only Only show files that are in the trash
      --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date Use created date instead of modified date.
      --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
      --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
      --dropbox-client-id string Dropbox App Client Id
      --dropbox-client-secret string Dropbox App Client Secret
  -n, --dry-run Do a trial run with no permanent changes
      --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers Dump HTTP headers - may contain sensitive info
      --exclude stringArray Exclude files matching pattern
      --exclude-from stringArray Read exclude patterns from file
      --exclude-if-present string Exclude directories if filename is present
      --fast-list Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray Read list of source-file names from file
  -f, --filter stringArray Add a file-filtering rule
      --filter-from stringArray Read filtering patterns from a file
      --ftp-host string FTP host to connect to
      --ftp-pass string FTP password
      --ftp-port string FTP port, leave blank to use default (21)
      --ftp-user string FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string Access Control List for new buckets.
      --gcs-client-id string Google Application Client Id
      --gcs-client-secret string Google Application Client Secret
      --gcs-location string Location for the newly created buckets.
      --gcs-object-acl string Access Control List for new objects.
      --gcs-project-number string Project number.
      --gcs-service-account-file string Service Account Credentials JSON file path
      --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
      --http-url string URL of http host to connect to
      --hubic-client-id string Hubic Client Id
      --hubic-client-secret string Hubic Client Secret
      --ignore-checksum Skip post copy check of checksums.
      --ignore-errors Delete even if there are I/O errors
      --ignore-existing Skip all files that exist on destination
      --ignore-size Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times Don't skip files that match size and time - transfer all files
      --immutable Do not modify files. Fail if existing files have been modified.
      --include stringArray Include files matching pattern
      --include-from stringArray Read include patterns from file
      --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string The mountpoint to use.
      --jottacloud-pass string Password.
      --jottacloud-user string User Name
      --local-no-check-updated Don't check to see if the files change during upload
      --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
      --local-nounc string Disable UNC (long path names) conversion on Windows
      --log-file string Log everything to this file
      --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int Number of low level retries to do. (default 10)
      --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int When synchronizing, limit the number of deletes (default -1)
      --max-depth int If set limits the recursion depth to this. (default -1)
      --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int Maximum size of data to transfer. (default off)
      --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int Number of connection retries. (default 3)
      --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018
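The global flags listed above combine with any rclone command. As a minimal sketch of composing a sync invocation from them (the remote name `remote:docs`, the local path, and the flag values are illustrative assumptions, not defaults):

```shell
# Compose an rclone sync command using global flags documented above.
# "remote:docs" and /home/user/docs are placeholder assumptions.
cmd="rclone sync /home/user/docs remote:docs"
cmd="$cmd --transfers 8"                          # 8 parallel file transfers (default 4)
cmd="$cmd --bwlimit '08:00,512 19:00,off'"        # 512 kBytes/s by day, unlimited overnight
cmd="$cmd --backup-dir remote:old --suffix .bak"  # keep overwritten files under remote:old
echo "$cmd"   # inspect first; add -n (--dry-run) for a trial run with no changes
```

The `--bwlimit` value uses the timetable form mentioned in its help text: pairs of `HH:MM,rate`, with `off` meaning unlimited.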


@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@@ -24,154 +24,263 @@ Run with --help to list the supported shells.
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
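
The global flags above combine freely on a single command line. As an illustrative sketch (the remote name and paths are assumptions, not taken from this page):

```
# Sync with 8 parallel transfers, skipping files that already match by
# checksum, and print one-line stats every 30 seconds.
rclone sync /local/media remote:media \
    --transfers 8 \
    --checksum \
    --stats 30s --stats-one-line
```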
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
* [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone. * [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone.
* [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone. * [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone.
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
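
The `--swift-*` flags listed above mirror keys in the rclone config file. A minimal Swift remote that takes its credentials from the standard OpenStack `OS_*` environment variables might look like this in `rclone.conf` (the remote name is an assumption):

```
[myswift]
type = swift
env_auth = true
```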


@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone genautocomplete bash" title: "rclone genautocomplete bash"
slug: rclone_genautocomplete_bash slug: rclone_genautocomplete_bash
url: /commands/rclone_genautocomplete_bash/ url: /commands/rclone_genautocomplete_bash/
@ -40,152 +40,261 @@ rclone genautocomplete bash [output_file] [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
      --delete-after                         When synchronizing, delete files on destination after transferring --cache-db-purge Purge the cache DB before
      --delete-before                        When synchronizing, delete files on destination before transferring --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
      --drive-chunk-size int                 Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --drive-client-id string Google Application Client Id
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
      --dump-headers                         Dump HTTP headers - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
      --exclude-if-present string            Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
      --fast-list                            Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
      --gcs-location string                  Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
      --ignore-errors                        Delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --no-check-certificate                 Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP headers - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
      --retries-sleep duration               Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
      --s3-storage-class string              Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors Delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
      --stats duration                       Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int      Number of connection retries. (default 3)
      --qingstor-endpoint string             Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
      --retries-sleep duration               Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style                  If true use path style access, if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
      --stats duration                       Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
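Since the flag table above only lists types and defaults, a short hedged sketch may help show how a few of the global flags combine on a command line. The remote name `remote:` and the paths are hypothetical examples, not taken from this page, and the sketch only prints the command so it runs even where rclone is not installed.

```shell
# Hypothetical sync combining several global flags from the table above:
#   --transfers 8   run 8 file transfers in parallel (default 4)
#   --tpslimit 10   cap HTTP transactions per second
#   -v              raise log verbosity one level
cmd='rclone sync /data/photos remote:photos --transfers 8 --tpslimit 10 -v'
# Print rather than execute, so the sketch works without rclone installed:
printf '%s\n' "$cmd"
```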
### SEE ALSO ### SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
@@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone genautocomplete zsh" title: "rclone genautocomplete zsh"
slug: rclone_genautocomplete_zsh slug: rclone_genautocomplete_zsh
url: /commands/rclone_genautocomplete_zsh/ url: /commands/rclone_genautocomplete_zsh/
@@ -40,152 +40,261 @@ rclone genautocomplete zsh [output_file] [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. A lower value is good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
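As a hedged example of how `rclone genautocomplete zsh` is typically used (the completion directory below is an assumption, not from this page): generate the script into a directory on zsh's `$fpath`, then re-run `compinit`. The sketch only assembles and prints the commands, so it runs even without rclone or zsh installed.

```shell
# Hypothetical install of zsh completions for rclone:
dest="$HOME/.zsh/completions/_rclone"      # any directory on zsh's $fpath works
gen="rclone genautocomplete zsh $dest"     # write the completion script there
reload='autoload -U compinit && compinit'  # then reload completions inside zsh
printf '%s\n%s\n' "$gen" "$reload"
```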
### SEE ALSO ### SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
@@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone gendocs" title: "rclone gendocs"
slug: rclone_gendocs slug: rclone_gendocs
url: /commands/rclone_gendocs/ url: /commands/rclone_gendocs/
@@ -28,152 +28,261 @@ rclone gendocs output_directory [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL for connection to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43

###### Auto generated by spf13/cobra on 1-Sep-2018


---
date: 2018-09-01T12:54:54+01:00
title: "rclone hashsum"
slug: rclone_hashsum
url: /commands/rclone_hashsum/
rclone hashsum <hash> remote:path [flags]

### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018
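Several of the flags above (`--max-size`, `--min-size`, `--bwlimit`, and the various `*-chunk-size` and `*-upload-cutoff` options) take sizes written with `b|k|M|G` suffixes, which rclone documents as binary (1024-based) multiples. As a rough illustration of how such values are interpreted — a sketch, not rclone's actual implementation, and `parse_size_suffix` is a name invented here — a parser might look like this:

```python
# Sketch of a parser for rclone-style size suffixes (b|k|M|G).
# Assumption: suffixes are binary multiples (k = 1024), as rclone documents.

def parse_size_suffix(value: str) -> float:
    """Parse a size like '100k', '5M' or '10G' into a number of bytes."""
    multipliers = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    suffix = value[-1]
    if suffix in multipliers:
        return float(value[:-1]) * multipliers[suffix]
    return float(value)  # bare number: interpreted as bytes

print(parse_size_suffix("100k"))  # 102400.0
```

Under this reading, the `--streaming-upload-cutoff` default of `100k` is 102400 bytes and the `--dropbox-chunk-size` default of `48M` is 48 × 1048576 bytes. Note that `--bwlimit` additionally accepts a full timetable (e.g. `"08:00,512 12:00,10M 23:00,off"`), which this sketch does not cover.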
@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone link"
slug: rclone_link
url: /commands/rclone_link/
@@ -35,152 +35,261 @@ rclone link remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. A lower value is good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transferring --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transferring --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --drive-client-id string Google Application Client Id
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --http-url string URL of http host to connect to
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors Delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping; use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP headers - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors Delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping; use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
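For illustration only (this example is not part of the generated reference), several of the global flags listed above combine naturally in a single sync invocation; the remote name and paths below are placeholders:

```
# Sync a local directory to a configured remote, limiting bandwidth to
# 512 kByte/s, transferring only files modified in the last 30 days,
# and comparing by checksum rather than mod-time & size:
rclone sync /home/user/photos remote:photos \
    --transfers 4 \
    --checkers 8 \
    --bwlimit 512k \
    --max-age 30d \
    --checksum \
    --log-level INFO
```

Adding `-n`/`--dry-run` to such a command previews the changes without making them.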
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone listremotes" title: "rclone listremotes"
slug: rclone_listremotes slug: rclone_listremotes
url: /commands/rclone_listremotes/ url: /commands/rclone_listremotes/
@ -30,152 +30,261 @@ rclone listremotes [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. A lower value is good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transferring --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transferring --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP headers - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors Delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping; use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP headers - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors Delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping; use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS, you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
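As a quick, hedged illustration of how a few of the flags above combine (this example is not from the generated docs themselves): the sketch below assembles a sync invocation using `--progress` and `--stats-one-line`, both new in v1.43, plus `--max-backlog`. The remote path `backup:docs` is a placeholder, and the script only prints the command, so it runs without a configured remote or even an rclone install.

```shell
#!/bin/sh
# Sketch only: "backup:docs" is a hypothetical remote path, not from this document.
set -eu

# --progress and --stats-one-line are new in v1.43; --max-backlog caps the
# sync/check queue (default 10000, per the flag list above). --dry-run makes
# the real invocation a trial run with no permanent changes.
CMD='rclone sync /home/user/docs backup:docs --progress --stats-one-line --max-backlog 10000 --dry-run'

# Printed rather than executed, so the sketch is safe to run anywhere:
echo "$CMD"
```

Dropping the `echo` (and, once satisfied, the `--dry-run`) turns the sketch into a real sync.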
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
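Much of this diff changes flag types from `int` to `SizeSuffix` (e.g. `--drive-chunk-size`, `--b2-chunk-size`), meaning values can be given with `b|k|M|G` suffixes directly. A hedged sketch, again print-only; `remote:media` is a placeholder path:

```shell
#!/bin/sh
set -eu
# SizeSuffix flags accept b|k|M|G suffixes; 64M is a power of 2 >= 256k, as the
# --drive-chunk-size help above requires. "remote:media" is hypothetical.
CMD='rclone copy /data/media remote:media --drive-chunk-size 64M --transfers 8'
echo "$CMD"
```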

View File

@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone ls" title: "rclone ls"
slug: rclone_ls slug: rclone_ls
url: /commands/rclone_ls/ url: /commands/rclone_ls/
@ -59,152 +59,261 @@ rclone ls remote:path [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transferring --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transferring --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP headers - may contain sensitive info --exclude stringArray Exclude files matching pattern
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors Delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone lsd" title: "rclone lsd"
slug: rclone_lsd slug: rclone_lsd
url: /commands/rclone_lsd/ url: /commands/rclone_lsd/
@@ -70,152 +70,261 @@ rclone lsd remote:path [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transferring --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transferring --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP headers - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors Delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping; use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP headers - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
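Many of the size-valued flags above (`--max-size`, `--min-size`, and the `SizeSuffix` options) accept either a bare number of kBytes or a `b|k|M|G` suffix. As an illustration only — a hedged Python sketch of that suffix convention, not rclone's actual Go `SizeSuffix` implementation — the parsing amounts to:

```python
# Illustrative sketch (not rclone's actual implementation) of the
# size-suffix convention used by flags such as --max-size and --min-size.
# Multipliers are 1024-based; a bare number is read as kBytes.

SUFFIXES = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}

def parse_size(value: str) -> int:
    """Return the byte count for a size string like '100k' or '4M'."""
    if value and value[-1] in SUFFIXES:
        return int(float(value[:-1]) * SUFFIXES[value[-1]])
    return int(float(value) * 1024)  # no suffix: interpreted as kBytes

print(parse_size("4M"))    # 4194304
print(parse_size("100"))   # 102400
```

Note the multipliers are 1024-based, so a default like `(default 5G)` above means 5×1024³ bytes.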
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone lsf" title: "rclone lsf"
slug: rclone_lsf slug: rclone_lsf
url: /commands/rclone_lsf/ url: /commands/rclone_lsf/
@ -148,152 +148,261 @@ rclone lsf remote:path [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --drive-client-id string Google Application Client Id
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP headers - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors Delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping; use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018
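Several of the global flags listed above are commonly combined on one command line. A hypothetical sketch (the paths and `remote:` names are illustrative, and `--dry-run` keeps both commands free of side effects):

```shell
# Trial-run a sync, capped at 1 MByte/s, with 8 parallel transfers:
rclone sync --dry-run --bwlimit 1M --transfers 8 --log-level INFO /home/user/docs remote:backup

# Copy only files modified within the last week and at least 10k in size:
rclone copy --dry-run --max-age 7d --min-size 10k remote:src remote:dst
```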
@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone lsjson"
slug: rclone_lsjson
url: /commands/rclone_lsjson/
@@ -21,6 +21,7 @@ The output is an array of Items, where each Item looks like this
      "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
    },
    "ID": "y2djkhiujf83u33",
    "OrigID": "UYOJVTUW00Q1RzTDA",
    "IsDir" : false,
    "MimeType" : "application/octet-stream",
    "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
@@ -80,158 +81,268 @@ rclone lsjson remote:path [flags]
--hash Include hashes in the output (may take longer).
-h, --help help for lsjson
--no-modtime Don't read the modification time (can speed things up).
--original Show the ID of the underlying Object.
-R, --recursive Recurse into the listing.
```
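The JSON items above are easy to consume from a script. A minimal sketch, assuming a captured `rclone lsjson` result — the echoed sample below stands in for real output, with field names following the example item in this document:

```shell
# Print the path and size of every regular file in the listing.
# In real use, replace the echo with: rclone lsjson -R remote:path
echo '[{"Path":"file.txt","Name":"file.txt","Size":6,"IsDir":false}]' |
  python3 -c 'import json,sys; [print(i["Path"], i["Size"]) for i in json.load(sys.stdin) if not i["IsDir"]]'
```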
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
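As a usage sketch (the remote name `remote:photos` and the local path are hypothetical placeholders for a configured remote), several of the global flags listed above can be combined on any command:

```
# Sketch only: "remote:photos" stands for a remote you have configured.
# --dry-run makes this a trial run with no permanent changes;
# --max-age, --transfers and --log-level are global flags from the list above.
rclone sync /home/user/photos remote:photos \
    --max-age 30d --transfers 8 --log-level INFO --dry-run
```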
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@@ -59,152 +59,261 @@ rclone lsl remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connnection retries. (default 3)
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018
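The global flags listed above can be combined freely on any rclone command. As an illustrative sketch only (the source path and the `remote:backup` destination are hypothetical placeholders, not taken from these docs), a bandwidth-limited sync could be put together as follows; the script merely assembles and prints the command, so it runs without rclone or a configured remote:

```shell
# Sketch: combine documented global flags into one sync invocation.
src="/home/user/docs"    # hypothetical local source directory
dst="remote:backup"      # hypothetical configured remote

# --transfers/--checkers control parallelism (4 and 8 are the documented defaults);
# --bwlimit caps throughput in kBytes/s with an optional suffix;
# --log-level INFO raises verbosity above the default NOTICE.
cmd="rclone sync $src $dst --transfers 4 --checkers 8 --bwlimit 1M --log-level INFO"

echo "$cmd"
```

The command is printed rather than executed so the flag combination can be reviewed before pointing it at real data.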


@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@@ -28,152 +28,261 @@ rclone md5sum remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering (default)
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
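Many of the flags above take a `SizeSuffix` value (for example `--s3-chunk-size 5M` or `--cache-chunk-total-size 10G`), and `--max-size`/`--min-size` note that a bare number is read "in k or suffix b|k|M|G". As an illustrative sketch only (not rclone's actual parser, and assuming the suffixes denote powers of 1024 as rclone's size conventions do):

```python
# Illustrative parser for rclone-style size values (b|k|M|G).
# Assumption: suffixes are powers of 1024 and a bare number means kBytes,
# per the --max-size help text above. Not rclone's real implementation.
def parse_size_suffix(s: str) -> int:
    """Return the number of bytes for a value like '5M', '10G' or '100k'."""
    multipliers = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if s[-1] in multipliers:
        return int(float(s[:-1]) * multipliers[s[-1]])
    return int(float(s) * 1024)  # bare numbers are taken as kBytes

print(parse_size_suffix("5M"))    # -> 5242880 (the --s3-chunk-size default)
print(parse_size_suffix("100k"))  # -> 102400 (the --streaming-upload-cutoff default)
```

Fractional values such as the `190.735M` default for `--b2-upload-cutoff` work the same way.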
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018
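Several flags above (`--retries-sleep`, `--stats`, `--contimeout`) take Go-style durations such as `500ms`, `60s` or `5m`. A minimal sketch of how those strings map to seconds, under the assumption of a single unit per value (Go's `time.ParseDuration` also accepts combined forms like `1h30m`, which this sketch does not handle):

```python
import re

# Simplified parser for the duration strings shown above (e.g. "500ms", "5m").
# Assumption: one unit per value; not Go's full time.ParseDuration grammar.
UNITS = {"ms": 0.001, "s": 1, "m": 60, "h": 3600}

def duration_seconds(s: str) -> float:
    m = re.fullmatch(r"(\d+(?:\.\d+)?)(ms|s|m|h)", s)
    if not m:
        raise ValueError(f"unsupported duration: {s!r}")
    return float(m.group(1)) * UNITS[m.group(2)]

print(duration_seconds("500ms"))  # -> 0.5
print(duration_seconds("5m"))     # -> 300.0
```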

@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@@ -25,152 +25,261 @@ rclone mkdir remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018


@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone mount" title: "rclone mount"
slug: rclone_mount slug: rclone_mount
url: /commands/rclone_mount/ url: /commands/rclone_mount/
@ -182,6 +182,23 @@ Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir rclone rc vfs/forget file=path/to/file dir=path/to/dir
### File Buffering
The `--buffer-size` flag determines the amount of memory
that will be used to buffer data in advance.
Each open file descriptor will try to keep the specified amount of
data in memory at all times. The buffered data is bound to one file
descriptor and won't be shared between multiple open file descriptors
of the same file.
This flag is an upper limit on the memory used per file descriptor.
The buffer will only use memory for data that is downloaded but
not yet read. If the buffer is empty, only a small amount of memory
will be used.
The maximum memory used by rclone for buffering can be up to
`--buffer-size * open files`.
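As a rough sketch of that bound (the file count of 10 below is a made-up example workload, not anything rclone reports; 16M is the documented `--buffer-size` default):

```shell
# Worst-case buffer memory is roughly --buffer-size multiplied by the
# number of simultaneously open file descriptors. With the default
# 16M buffer and a hypothetical 10 open files:
buffer_size_mib=16
open_files=10
echo "up to $((buffer_size_mib * open_files)) MiB used for buffering"
```

If that is too much, the bound can be lowered by passing a smaller value, e.g. `rclone mount remote:path /path/to/mountpoint --buffer-size 8M`.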
### File Caching ### File Caching
**NB** File caching is **EXPERIMENTAL** - use with care! **NB** File caching is **EXPERIMENTAL** - use with care!
@ -284,6 +301,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--allow-root Allow access to root user. --allow-root Allow access to root user.
--attr-timeout duration Time for which file/directory attributes are cached. (default 1s) --attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
--daemon Run mount as a daemon (background mode). --daemon Run mount as a daemon (background mode).
--daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes).
--debug-fuse Debug the FUSE internals - needs -v. --debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode. --default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s) --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
@ -302,8 +320,8 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size int Read the source objects in chunks. --vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. -1 is unlimited. --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--volname string Set the volume name (not supported by all OSes). --volname string Set the volume name (not supported by all OSes).
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
``` ```
@ -311,152 +329,261 @@ rclone mount remote:path /path/to/mountpoint [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --drive-client-id string Google Application Client Id
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
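The flags above rarely appear one at a time; a typical invocation combines transfer-tuning, filtering, and stats flags from this list. As a minimal sketch (not from the generated docs: the remote name `remote:photos` and the source path are hypothetical placeholders for your own configured remote), one way to assemble such a command, keeping `--dry-run` so nothing is changed until the output has been inspected:

```shell
# Build the command as a string first so it can be reviewed before running.
# All flags used here appear in the options list above; the remote name and
# paths are illustrative only.
CMD="rclone sync /data/photos remote:photos --transfers 8 --checkers 16 --bwlimit 10M --max-age 30d --stats 30s --dry-run"

# Print the assembled command; drop --dry-run once the trial output looks right.
echo "$CMD"
```

Running `eval "$CMD"` (or pasting the echoed line) would then perform the trial sync; removing `--dry-run` makes it real.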
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018
@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@@ -45,152 +45,261 @@ rclone move source:path dest:path [flags]
### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
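The global flags listed above can be combined with any rclone subcommand. As a purely illustrative sketch (the remote name `remote:backup` and the local path are hypothetical, not taken from this document), a few of these flags used together might look like:

```shell
# Hypothetical sync using several of the global flags documented above:
# cap bandwidth at 1 MByte/s, run 8 parallel transfers and 16 checkers,
# only consider files modified within the last 30 days, log at INFO level.
# --dry-run makes this a trial run with no permanent changes.
rclone sync /home/user/data remote:backup \
    --bwlimit 1M \
    --transfers 8 \
    --checkers 16 \
    --max-age 30d \
    --log-level INFO \
    --dry-run
```

Dropping `--dry-run` would perform the sync for real; every flag shown here appears in the reference list above.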
### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43

###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone moveto"
slug: rclone_moveto
url: /commands/rclone_moveto/
@@ -54,152 +54,261 @@ rclone moveto source:path dest:path [flags]

### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018
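Many of the flags above take a `SizeSuffix` value such as `5M`, `100k` or `190.735M`. The multiples are binary (1k = 1024 bytes), and the help text for size filters such as `--min-size` ("in k or suffix b|k|M|G") indicates that a bare number is read as kBytes. As an illustrative sketch only, not rclone's actual parser, such values could be decoded like this:

```python
# Sketch of decoding rclone-style SizeSuffix values ("5M", "100k", "190.735M").
# Assumptions: binary multiples (1k = 1024 bytes) and bare numbers read as
# kBytes, per the --min-size/--max-size help text. Hypothetical helper, not
# rclone's real implementation.

SUFFIXES = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}

def parse_size_suffix(text: str) -> int:
    """Return the byte count described by a SizeSuffix string."""
    text = text.strip()
    if text[-1] in SUFFIXES:
        return int(float(text[:-1]) * SUFFIXES[text[-1]])
    return int(float(text) * 1024)  # bare number -> kBytes

# e.g. the --b2-upload-cutoff default of 190.735M is roughly 200 MB:
print(parse_size_suffix("190.735M"))
```

This also shows why defaults like `190.735M` look odd in the table: they are round byte counts expressed in binary units.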

@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone ncdu"
slug: rclone_ncdu
url: /commands/rclone_ncdu/
@ -52,152 +52,261 @@ rclone ncdu remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018


@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone obscure" title: "rclone obscure"
slug: rclone_obscure slug: rclone_obscure
url: /commands/rclone_obscure/ url: /commands/rclone_obscure/
@ -25,152 +25,261 @@ rclone obscure password [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
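Several of the flags listed above take a size with an optional `b|k|M|G` suffix (for example `--max-size`, `--streaming-upload-cutoff`, and the `SizeSuffix` options). As an illustration only — this is a sketch, not rclone's source — the suffix convention can be expressed as a small parser, assuming rclone's binary (1024-based) multipliers:

```python
# Hypothetical sketch of SizeSuffix parsing as described by the flag help
# above.  Assumption: suffixes are binary multiples (k = 1024, M = 1024^2,
# G = 1024^3); a bare number is taken as bytes in this simplified version.

def parse_size_suffix(value: str) -> int:
    """Parse a size like '100k', '5M' or '10G' into a byte count."""
    multipliers = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    suffix = value[-1]
    if suffix in multipliers:
        # Numeric part may be fractional, e.g. '190.735M'
        return int(float(value[:-1]) * multipliers[suffix])
    return int(value)
```

So `--max-size 5M` would correspond to 5 × 1024 × 1024 bytes under this reading.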
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018

---
date: 2018-09-01T12:54:54+01:00
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
### Options inherited from parent commands
```
      --acd-auth-url string                     Auth server URL.
      --acd-client-id string                    Amazon Application Client ID.
      --acd-client-secret string                Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix       Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                    Token server url.
      --acd-upload-wait-per-gb Duration         Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                     Remote or path to alias.
      --ask-password                            Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                            If enabled, do not request console confirmation.
      --azureblob-access-tier string            Access tier of blob, supports hot, cool and archive tiers.
      --azureblob-account string                Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix         Upload chunk size. Must fit in memory. (default 4M)
      --azureblob-endpoint string               Endpoint for the service
      --azureblob-key string                    Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-sas-url string                SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix      Cutoff for switching to chunked upload. (default 256M)
      --b2-account string                       Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                      Endpoint for the service.
      --b2-hard-delete                          Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                           Application Key
      --b2-test-mode string                     A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix             Cutoff for switching to chunked upload. (default 190.735M)
      --b2-versions                             Include old versions in directory listings.
      --backup-dir string                       Make backups into hierarchy based in DIR.
      --bind string                             Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                    Box App Client Id.
      --box-client-secret string                Box App Client Secret
      --box-commit-retries int                  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix            Cutoff for switching to multipart upload. (default 50M)
      --buffer-size int                         In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                     Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration     Interval at which chunk cleanup runs (default 1m0s)
      --cache-chunk-no-memory                   Disable the in-memory cache for storing chunks during streaming
      --cache-chunk-path string                 Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix             The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
      --cache-chunk-total-size SizeSuffix       The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
      --cache-db-path string                    Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                          Purge the cache DB before
      --cache-db-wait-time Duration             How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                        Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                 How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
      --cache-plex-password string              The password of the Plex user
      --cache-plex-url string                   The URL of the Plex server
      --cache-plex-username string              The username of the Plex user
      --cache-read-retries int                  How many times to retry a read from a cache storage (default 10)
      --cache-remote string                     Remote to cache.
      --cache-rps int                           Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
      --cache-tmp-upload-path string            Directory to keep temporary files until they are uploaded to the cloud storage
      --cache-tmp-wait-time Duration            How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                       How many workers should run in parallel to download chunks (default 4)
      --cache-writes                            Will cache file data on writes through the FS
      --checkers int                            Number of checkers to run in parallel. (default 8)
  -c, --checksum                                Skip based on checksum & size, not mod-time & size
      --config string                           Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                     Connect timeout (default 1m0s)
  -L, --copy-links                              Follow symlinks and copy the pointed to item.
      --cpuprofile string                       Write cpu profile to file
      --crypt-directory-name-encryption         Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string        How to encrypt the filenames. (default "standard")
      --crypt-password string                   Password or pass phrase for encryption.
      --crypt-password2 string                  Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                     Remote to encrypt/decrypt.
      --crypt-show-mapping                      For all files listed show how the names encrypt.
      --delete-after                            When synchronizing, delete files on destination after transferring (default)
      --delete-before                           When synchronizing, delete files on destination before transferring
      --delete-during                           When synchronizing, delete files during transfer
      --delete-excluded                         Delete files on dest excluded from sync
      --disable string                          Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                 Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-alternate-export                  Use alternate export URLs for google documents export.
      --drive-auth-owner-only                   Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix             Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                  Google Application Client Id
      --drive-client-secret string              Google Application Client Secret
      --drive-formats string                    Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-impersonate string                Impersonate this user when using a service account.
      --drive-keep-revision-forever             Keep new head revision forever.
      --drive-list-chunk int                    Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string             ID of the root folder
      --drive-scope string                      Scope that rclone should use when requesting access from drive.
      --drive-service-account-file string       Service Account Credentials JSON file path
      --drive-shared-with-me                    Only show files that are shared with me
      --drive-skip-gdocs                        Skip google documents in all listings.
      --drive-trashed-only                      Only show files that are in the trash
      --drive-upload-cutoff SizeSuffix          Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                  Use created date instead of modified date.
      --drive-use-trash                         Send files to the trash instead of deleting permanently. (default true)
      --dropbox-chunk-size SizeSuffix           Upload chunk size. Max 150M. (default 48M)
      --dropbox-client-id string                Dropbox App Client Id
      --dropbox-client-secret string            Dropbox App Client Secret
  -n, --dry-run                                 Do a trial run with no permanent changes
      --dump string                             List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                             Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                            Dump HTTP headers - may contain sensitive info
      --exclude stringArray                     Exclude files matching pattern
      --exclude-from stringArray                Read exclude patterns from file
      --exclude-if-present string               Exclude directories if filename is present
      --fast-list                               Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                  Read list of source-file names from file
  -f, --filter stringArray                      Add a file-filtering rule
      --filter-from stringArray                 Read filtering patterns from a file
      --ftp-host string                         FTP host to connect to
      --ftp-pass string                         FTP password
      --ftp-port string                         FTP port, leave blank to use default (21)
      --ftp-user string                         FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                   Access Control List for new buckets.
      --gcs-client-id string                    Google Application Client Id
      --gcs-client-secret string                Google Application Client Secret
      --gcs-location string                     Location for the newly created buckets.
      --gcs-object-acl string                   Access Control List for new objects.
      --gcs-project-number string               Project number.
      --gcs-service-account-file string         Service Account Credentials JSON file path
      --gcs-storage-class string                The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                         URL of http host to connect to
      --hubic-client-id string                  Hubic Client Id
      --hubic-client-secret string              Hubic Client Secret
      --ignore-checksum                         Skip post copy check of checksums.
      --ignore-errors                           Delete even if there are I/O errors
      --ignore-existing                         Skip all files that exist on destination
      --ignore-size                             Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                            Don't skip files that match size and time - transfer all files
      --immutable                               Do not modify files. Fail if existing files have been modified.
      --include stringArray                     Include files matching pattern
      --include-from stringArray                Read include patterns from file
      --jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string            The mountpoint to use.
      --jottacloud-pass string                  Password.
      --jottacloud-user string                  User Name
      --local-no-check-updated                  Don't check to see if the files change during upload
      --local-no-unicode-normalization          Don't apply unicode normalization to paths and filenames
      --local-nounc string                      Disable UNC (long path names) conversion on Windows
      --log-file string                         Log everything to this file
      --log-level string                        Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                   Number of low level retries to do. (default 10)
      --max-age duration                        Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                         Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                          When synchronizing, limit the number of deletes (default -1)
      --max-depth int                           If set limits the recursion depth to this. (default -1)
      --max-size int                            Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int                        Maximum size of data to transfer. (default off)
      --mega-debug                              Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int       Number of connection retries. (default 3)
      --qingstor-endpoint string              Endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
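The `--bwlimit` flag listed above accepts either a single limit or a full timetable such as `"08:00,512 12:00,10M 23:00,off"`. As a rough sketch — not rclone's actual implementation — the limit in force at a given time of day can be resolved like this, assuming each entry applies from its time onward and the last entry wraps around past midnight:

```python
# Hypothetical sketch of resolving a --bwlimit timetable entry.  Entries are
# space-separated HH:MM,limit pairs; a bare number means kBytes/s, a suffix
# like 10M scales it, and "off" disables the limit.  Zero-padded HH:MM times
# compare correctly as strings, so no time parsing is needed here.

def active_limit(timetable: str, hhmm: str) -> str:
    """Return the limit string in force at time-of-day hhmm."""
    entries = sorted(entry.split(",") for entry in timetable.split())
    # Before the first entry of the day, the last entry still applies
    # (it wraps around past midnight).
    current = entries[-1][1]
    for start, limit in entries:
        if start <= hhmm:
            current = limit
    return current
```

For example, with the timetable above a transfer running at 09:30 would be limited to 512 kBytes/s, while one running at 02:00 would be unlimited.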
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018

---
date: 2018-09-01T12:54:54+01:00
title: "rclone rc"
slug: rclone_rc
url: /commands/rclone_rc/
### Options inherited from parent commands
```
      --acd-auth-url string                     Auth server URL.
      --acd-client-id string                    Amazon Application Client ID.
      --acd-client-secret string                Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix       Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                    Token server url.
      --acd-upload-wait-per-gb Duration         Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                     Remote or path to alias.
      --ask-password                            Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                            If enabled, do not request console confirmation.
      --azureblob-access-tier string            Access tier of blob, supports hot, cool and archive tiers.
      --azureblob-account string                Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix         Upload chunk size. Must fit in memory. (default 4M)
      --azureblob-endpoint string               Endpoint for the service
      --azureblob-key string                    Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-sas-url string                SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix      Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transferring (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k.
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP headers - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors Delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping; use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
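`--tpslimit` caps HTTP transactions per second, while `--tpslimit-burst` allows short bursts above the steady rate. A common way to implement that behaviour is a token bucket; the sketch below illustrates the general technique under assumed names, and is not rclone's actual limiter:

```python
# Illustrative token-bucket sketch of the idea behind --tpslimit /
# --tpslimit-burst. Not rclone's implementation; names are assumptions.

class TokenBucket:
    def __init__(self, rate: float, burst: int):
        self.rate = rate           # tokens refilled per second (--tpslimit)
        self.burst = burst         # maximum stored tokens (--tpslimit-burst)
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Return True if a transaction may start at time `now` (seconds)."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=10.0, burst=1)
print(bucket.allow(0.0))   # True  - the single burst token is spent
print(bucket.allow(0.01))  # False - only 0.1 tokens refilled after 10 ms
print(bucket.allow(0.2))   # True  - a full token has refilled by 200 ms
```

With `burst` raised, several transactions can start back-to-back before the steady per-second rate takes over, which matches the documented default of `--tpslimit-burst 1` permitting no burst beyond the base rate.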


@@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone rcat" title: "rclone rcat"
slug: rclone_rcat slug: rclone_rcat
url: /commands/rclone_rcat/ url: /commands/rclone_rcat/
@@ -47,152 +47,261 @@ rclone rcat remote:path [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
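Every flag listed above can also be set from the environment: rclone reads a variable named `RCLONE_` plus the long option name uppercased, with `-` replaced by `_`. A minimal shell sketch of the name mapping (`--drive-chunk-size` is just an example flag; see `rclone help flags` for the authoritative list):

```shell
# Derive the environment variable that corresponds to an rclone flag:
# strip the leading "--", uppercase, map "-" to "_", prefix "RCLONE_".
flag="--drive-chunk-size"
env_var="RCLONE_$(printf '%s' "${flag#--}" | tr 'a-z-' 'A-Z_')"
echo "$env_var"   # prints RCLONE_DRIVE_CHUNK_SIZE

# Exporting it has the same effect as passing the flag on every invocation:
# export RCLONE_DRIVE_CHUNK_SIZE=64M
```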
### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43

###### Auto generated by spf13/cobra on 1-Sep-2018
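The `--bwlimit` flag's BwTimetable value is either a single limit or a space-separated list of `HH:MM,limit` pairs, where the limit `off` removes the cap. A hedged sketch of composing one (the local path and the remote name `remote:` are placeholders):

```shell
# Throttle to 512 kBytes/s from 08:00, unlimited from 18:00 onwards.
timetable="08:00,512 18:00,off"
cmd="rclone sync /local/dir remote:dir --bwlimit \"$timetable\" --dry-run"
echo "$cmd"
```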
---
date: 2018-09-01T12:54:54+01:00
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
---

```
rclone rmdir remote:path [flags]
```

### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int  Number of connection retries. (default 3)
      --qingstor-endpoint string  Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018
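As an illustrative aside (not part of the generated page): two of the flags newly documented in this v1.43 list are `--progress`/`-P` and `--max-backlog`. A hypothetical invocation combining them might look like the sketch below; the remote name `gdrive:` and the paths are made-up examples, and the command is only echoed here rather than executed.

```shell
# Hypothetical rclone v1.43 invocation; "gdrive:" is an assumed remote name.
# --progress shows live transfer stats; --max-backlog raises the bound on
# queued objects during sync (default 10000 per the options list above).
cmd='rclone sync /home/user/photos gdrive:photos --progress --max-backlog 20000'
echo "$cmd"
```

Raising `--max-backlog` trades memory for a longer lookahead when syncing very large directory trees.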
View File
@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone rmdirs"
slug: rclone_rmdirs
url: /commands/rclone_rmdirs/
@@ -35,152 +35,261 @@ rclone rmdirs remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob, supports hot, cool and archive tiers.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 190.735M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload. (default 50M)
      --buffer-size int  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration  Interval at which chunk cleanup runs (default 1m0s)
      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming
      --cache-chunk-path string  Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix  The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
      --cache-chunk-total-size SizeSuffix  The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
      --cache-db-path string  Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge  Purge the cache DB before
      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string  Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration  How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
      --cache-plex-password string  The password of the Plex user
      --cache-plex-url string  The URL of the Plex server
      --cache-plex-username string  The username of the Plex user
      --cache-read-retries int  How many times to retry a read from a cache storage (default 10)
      --cache-remote string  Remote to cache.
      --cache-rps int  Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded to the cloud storage
      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int  How many workers should run in parallel to download chunks (default 4)
      --cache-writes  Will cache file data on writes through the FS
      --checkers int  Number of checkers to run in parallel. (default 8)
  -c, --checksum  Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
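As a quick illustration of how a few of the global flags above combine in practice (a sketch, not from the generated docs — `remote:` is a placeholder remote name and all paths are hypothetical):

```shell
# Sync with 8 parallel transfers, a 10 MByte/s bandwidth cap,
# and a trial run first so no permanent changes are made.
rclone sync /home/user/docs remote:docs --transfers 8 --bwlimit 10M --dry-run

# New in v1.43: show live transfer statistics with -P / --progress.
rclone copy /home/user/photos remote:photos -P

# Only move files older than a week, keeping anything overwritten
# or deleted in a backup hierarchy under a dated directory.
rclone move /var/log/archive remote:logs --min-age 7d \
    --backup-dir remote:logs-backup/2018-09-01
```

All flags shown (`--transfers`, `--bwlimit`, `--dry-run`, `-P`, `--min-age`, `--backup-dir`) appear in the option list above.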
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone serve"
slug: rclone_serve
url: /commands/rclone_serve/
@@ -31,155 +31,264 @@ rclone serve <protocol> [opts] <remote> [flags]
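For example, the serve command can export a remote over HTTP or WebDAV (a sketch — `remote:path` and the listen address are placeholders):

```shell
# Serve the contents of remote:path read-only over HTTP on port 8080.
rclone serve http remote:path --addr :8080

# Serve the same path over WebDAV instead.
rclone serve webdav remote:path --addr :8080
```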
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav.
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone serve http"
slug: rclone_serve_http
url: /commands/rclone_serve_http/
@ -96,6 +96,23 @@ Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
### File Buffering

The `--buffer-size` flag determines the amount of memory
that will be used to buffer data in advance.

Each open file descriptor will try to keep the specified amount of
data in memory at all times. The buffered data is bound to one file
descriptor and won't be shared between multiple open file descriptors
of the same file.

This flag is an upper limit for the memory used per file descriptor.
The buffer will only use memory for data that is downloaded but
not yet read. If the buffer is empty, only a small amount of memory
will be used.

The maximum memory used by rclone for buffering can be up to
`--buffer-size * open files`.
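The worst-case bound above is simple arithmetic, as this quick sketch shows. The figures are hypothetical, not taken from the docs; only the 16M default for `--buffer-size` comes from the flag listing:

```shell
# Worst-case read-ahead memory is --buffer-size multiplied by the number
# of open file descriptors. Sizes here are in MiB.
buffer_size=16   # rclone's default --buffer-size is 16M
open_files=10    # hypothetical: ten files being read at once
echo "$((buffer_size * open_files))M"   # prints 160M
```

So raising `--buffer-size` can help throughput on a single large file, but the cost scales with how many files are open concurrently.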
### File Caching

**NB** File caching is **EXPERIMENTAL** - use with care!
@ -217,159 +234,268 @@ rclone serve http remote:path [flags]
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
```
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
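Many of the size-based flags listed above (`--max-size`, `--drive-chunk-size`, `--b2-upload-cutoff`, and the other `SizeSuffix` options) take values with the `b|k|M|G` suffixes shown in their help text. The sketch below shows one way such a value could be interpreted. It is an illustration in Python, not rclone's actual Go parser, and the 1024-based multiples are an assumption made here rather than something this flag listing specifies.

```python
# Illustrative parser for rclone-style SizeSuffix values such as "100k",
# "5M" or "9G".  Assumption: suffixes are 1024-based multiples and a bare
# number is a plain byte count.
MULTIPLIERS = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}

def parse_size_suffix(value: str) -> int:
    """Return the number of bytes for a SizeSuffix string like '5M'."""
    suffix = value[-1]
    if suffix in MULTIPLIERS:
        # Fractional values like "190.735M" appear in the defaults above.
        return int(float(value[:-1]) * MULTIPLIERS[suffix])
    return int(value)  # no suffix: plain byte count

if __name__ == "__main__":
    for example in ("100k", "5M", "9G", "190.735M"):
        print(example, "->", parse_size_suffix(example))
```

For example, under these assumptions the `--drive-chunk-size` default of `8M` works out to 8388608 bytes.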
### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 1-Sep-2018
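The age flags above (`--max-age`, `--min-age`) accept durations with the `ms|s|m|h|d|w|M|y` suffixes. The sketch below maps those suffixes to seconds for illustration; the month and year lengths used here (30 and 365 days) are simplifying assumptions, not necessarily the constants rclone itself uses.

```python
# Illustrative converter for rclone-style age values such as "30d" or "2w",
# as accepted by --max-age / --min-age.  Assumptions: a month is 30 days
# and a year is 365 days; rclone's own definitions may differ.
SECONDS = {
    "ms": 0.001,
    "s": 1,
    "m": 60,
    "h": 3600,
    "d": 86400,
    "w": 7 * 86400,
    "M": 30 * 86400,
    "y": 365 * 86400,
}

def parse_age(value: str) -> float:
    """Return the number of seconds for an age string like '2w' or '500ms'."""
    # Check "ms" before "s" and "m" so the two-letter suffix wins;
    # matching is case-sensitive, so "m" (minutes) and "M" (months) differ.
    for suffix in ("ms", "s", "m", "h", "d", "w", "M", "y"):
        if value.endswith(suffix):
            return float(value[: -len(suffix)]) * SECONDS[suffix]
    return float(value)  # bare number: seconds

if __name__ == "__main__":
    for example in ("500ms", "60s", "2w", "30d"):
        print(example, "->", parse_age(example))
```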
---
date: 2018-09-01T12:54:54+01:00
title: "rclone serve restic"
slug: rclone_serve_restic
url: /commands/rclone_serve_restic/
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018


@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone serve webdav" title: "rclone serve webdav"
slug: rclone_serve_webdav slug: rclone_serve_webdav
url: /commands/rclone_serve_webdav/ url: /commands/rclone_serve_webdav/
@ -16,8 +16,19 @@ remote over HTTP via the webdav protocol. This can be viewed with a
webdav client or you can make a remote of type webdav to read and webdav client or you can make a remote of type webdav to read and
write it. write it.
NB at the moment each directory listing reads the start of each file ### Webdav options
which is undesirable: see https://github.com/golang/go/issues/22577
#### --etag-hash
This controls the ETag header. Without this flag the ETag will be
based on the ModTime and Size of the object.
If this flag is set to "auto" then rclone will choose the first
supported hash on the backend or you can use a named hash such as
"MD5" or "SHA-1".
Use "rclone hashsum" to see the full list.
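The selection logic described above can be sketched roughly as follows. This is an illustrative sketch only, not rclone's Go implementation; the function name `choose_etag` and its signature are invented for this example.

```python
# Sketch of how --etag-hash picks the ETag source for "rclone serve webdav".
# (Hypothetical helper, not rclone's actual code.)
def choose_etag(etag_hash, supported_hashes, modtime, size):
    """Return (label, value) describing what the ETag is derived from."""
    if not etag_hash:
        # Flag unset: ETag is based on the object's ModTime and Size.
        return ("modtime+size", "%s-%s" % (modtime, size))
    if etag_hash == "auto":
        # "auto": use the first hash the backend supports.
        return (supported_hashes[0], supported_hashes[0])
    if etag_hash in supported_hashes:
        # A named hash such as "MD5" or "SHA-1".
        return (etag_hash, etag_hash)
    raise ValueError("hash %r not supported by this backend" % etag_hash)

print(choose_etag("", ["MD5", "SHA-1"], 1535800000, 42)[0])  # modtime+size
print(choose_etag("auto", ["MD5", "SHA-1"], 0, 0)[0])        # MD5
```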
### Server options ### Server options
@ -93,6 +104,23 @@ Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir rclone rc vfs/forget file=path/to/file dir=path/to/dir
### File Buffering
The `--buffer-size` flag determines the amount of memory
that will be used to buffer data in advance.
Each open file descriptor will try to keep the specified amount of
data in memory at all times. The buffered data is bound to one file
descriptor and won't be shared between multiple open file descriptors
of the same file.
This flag is an upper limit on the memory used per file descriptor.
The buffer will only use memory for data that is downloaded but not
yet read. If the buffer is empty, only a small amount of memory
will be used.
The maximum memory used by rclone for buffering can be up to
`--buffer-size * open files`.
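The worst-case arithmetic above can be sketched as follows (an illustrative calculation, not rclone internals):

```python
# Upper bound on read-ahead buffer memory: each open file descriptor may
# hold up to --buffer-size of downloaded-but-unread data, so the worst
# case is simply the product of the two.
def max_buffer_memory(buffer_size_bytes, open_files):
    return buffer_size_bytes * open_files

# With the default --buffer-size of 16M and 10 open files:
print(max_buffer_memory(16 * 1024**2, 10) // 1024**2)  # 160 (MiB)
```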
### File Caching ### File Caching
**NB** File caching is **EXPERIMENTAL** - use with care! **NB** File caching is **EXPERIMENTAL** - use with care!
@ -194,6 +222,7 @@ rclone serve webdav remote:path [flags]
--cert string SSL PEM key (concatenation of certificate and CA certificate) --cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with --client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s) --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--etag-hash string Which hash to use for the ETag, or auto or blank for off
--gid uint32 Override the gid field set by the filesystem. (default 502) --gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for webdav -h, --help help for webdav
--htpasswd string htpasswd file - if not provided no authentication is done --htpasswd string htpasswd file - if not provided no authentication is done
@ -214,159 +243,268 @@ rclone serve webdav remote:path [flags]
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s) --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off") --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size int Read the source objects in chunks. --vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. -1 is unlimited. --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
``` ```
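The `--vfs-read-chunk-size-limit` doubling behaviour described in the flag list above can be sketched like this. It is a simplified model for illustration only (the function name `chunk_sizes` is invented); rclone's actual VFS reader is more involved.

```python
# Model of chunked reading: start at --vfs-read-chunk-size, double the
# chunk size after each chunk read, and cap it at the limit (a
# non-positive limit meaning "unlimited").
def chunk_sizes(chunk_size, limit, n):
    sizes = []
    for _ in range(n):
        sizes.append(chunk_size)
        chunk_size *= 2
        if limit > 0:
            chunk_size = min(chunk_size, limit)
    return sizes

print(chunk_sizes(128, 512, 5))  # [128, 256, 512, 512, 512]
```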
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transferring --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transferring --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
      --dump-headers                        Dump HTTP headers - may contain sensitive info       --exclude stringArray                          Exclude files matching pattern
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
      --exclude-if-present string           Exclude directories if filename is present       --delete-after                                 When synchronizing, delete files on destination after transferring (default)
      --fast-list                           Use recursive list if available. Uses more memory but fewer transactions.       --delete-before                                When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).       --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
      --ignore-errors                       Delete even if there are I/O errors       --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
      --ignore-size                         Ignore size when skipping; use mod-time or checksum.       --drive-formats string                         Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --modify-window duration              Max time diff to be considered the same (default 1ns)       --dump-headers                                 Dump HTTP headers - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
      --rc-addr string                      IP address:Port or :Port to bind server to. (default "localhost:5572")       --ftp-port string                              FTP port, leave blank to use default (21)
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int              Number of connection retries. (default 3)
      --qingstor-endpoint string                     Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
      --rc-addr string                               IP address:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
      --webdav-vendor string                         Name of the WebDAV site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
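
As an illustrative sketch of how a few of the options documented above combine on one command line (the remote name `remote:` and both paths below are hypothetical placeholders, not taken from this page):

```
# Illustrative sketch only - "remote:" and the paths are hypothetical.
# -P/--progress, --stats-one-line and --max-backlog are among the global
# flags documented in the list above.
rclone sync /home/user/docs remote:backup/docs \
    -P --stats-one-line \
    --max-backlog 20000
```

Here `-P` shows live progress during the transfer, `--stats-one-line` keeps the stats output to a single line, and `--max-backlog` raises the sync/check backlog above its default of 10000.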
### SEE ALSO ### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone sha1sum" title: "rclone sha1sum"
slug: rclone_sha1sum slug: rclone_sha1sum
url: /commands/rclone_sha1sum/ url: /commands/rclone_sha1sum/
@ -28,152 +28,261 @@ rclone sha1sum remote:path [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
      --delete-after                        When synchronizing, delete files on destination after transferring       --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --delete-before                       When synchronizing, delete files on destination before transferring       --cache-dir string                             Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)       --cache-rps int                                Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
      --dump-headers                        Dump HTTP headers - may contain sensitive info       --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
      --exclude-if-present string           Exclude directories if filename is present       --delete-after                                 When synchronizing, delete files on destination after transferring (default)
      --fast-list                           Use recursive list if available. Uses more memory but fewer transactions.       --delete-before                                When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).       --disable string                               Disable a comma separated list of features. Use help to see a list.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int Number of connection retries. (default 3)
      --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
      --webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
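Taken together, these global flags are combined on a single rclone command line. A minimal sketch of a cautious sync (the remote name `remote:` and the local path are placeholders, not taken from this page, and assume a remote already set up with `rclone config`):

```shell
# Illustrative only: "remote:" stands for any configured remote.
# --dry-run      trial run with no permanent changes
# --max-age 7d   only consider files modified within the last week
# --bwlimit 1M   limit bandwidth to 1 MByte/s
rclone sync /home/user/docs remote:backup/docs --dry-run --max-age 7d --bwlimit 1M
```

Dropping `--dry-run` performs the sync for real, so it is worth previewing first.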
### SEE ALSO
* [rclone](/commands/rclone/)	 - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018
@ -1,5 +1,5 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
@ -26,152 +26,261 @@ rclone size remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string Auth server URL.
      --acd-client-id string Amazon Application Client ID.
      --acd-client-secret string Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string Token server url.
      --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string Remote or path to alias.
      --ask-password Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm If enabled, do not request console confirmation.
      --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
      --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
      --azureblob-endpoint string Endpoint for the service
      --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-sas-url string SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
      --b2-account string Account ID or Application Key ID
      --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string Endpoint for the service.
      --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
      --b2-key string Application Key
      --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
      --b2-versions Include old versions in directory listings.
      --backup-dir string Make backups into hierarchy based in DIR.
      --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string Box App Client Id.
      --box-client-secret string Box App Client Secret
      --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
      --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
      --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
      --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
      --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
      --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge Purge the cache DB before
      --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
      --cache-plex-password string The password of the Plex user
      --cache-plex-url string The URL of the Plex server
      --cache-plex-username string The username of the Plex user
      --cache-read-retries int How many times to retry a read from a cache storage (default 10)
      --cache-remote string Remote to cache.
      --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
      --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
      --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int How many workers should run in parallel to download chunks (default 4)
      --cache-writes Will cache file data on writes through the FS
      --checkers int Number of checkers to run in parallel. (default 8)
  -c, --checksum Skip based on checksum & size, not mod-time & size
      --config string Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration Connect timeout (default 1m0s)
  -L, --copy-links Follow symlinks and copy the pointed to item.
      --cpuprofile string Write cpu profile to file
      --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
      --crypt-password string Password or pass phrase for encryption.
      --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string Remote to encrypt/decrypt.
      --crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after When synchronizing, delete files on destination after transferring (default)
      --delete-before When synchronizing, delete files on destination before transferring
      --delete-during When synchronizing, delete files during transfer
      --delete-excluded Delete files on dest excluded from sync
      --disable string Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-alternate-export Use alternate export URLs for google documents export.
      --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string Google Application Client Id
      --drive-client-secret string Google Application Client Secret
      --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-impersonate string Impersonate this user when using a service account.
      --drive-keep-revision-forever Keep new head revision forever.
      --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string ID of the root folder
      --drive-scope string Scope that rclone should use when requesting access from drive.
      --drive-service-account-file string Service Account Credentials JSON file path
      --drive-shared-with-me Only show files that are shared with me
      --drive-skip-gdocs Skip google documents in all listings.
      --drive-trashed-only Only show files that are in the trash
      --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date Use created date instead of modified date.
      --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
      --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
      --dropbox-client-id string Dropbox App Client Id
      --dropbox-client-secret string Dropbox App Client Secret
  -n, --dry-run Do a trial run with no permanent changes
      --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers Dump HTTP headers - may contain sensitive info
      --exclude stringArray Exclude files matching pattern
      --exclude-from stringArray Read exclude patterns from file
      --exclude-if-present string Exclude directories if filename is present
      --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int Number of connection retries. (default 3)
      --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
 ```
 
 ### SEE ALSO
 
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
 
-###### Auto generated by spf13/cobra on 16-Jun-2018
+###### Auto generated by spf13/cobra on 1-Sep-2018
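The flag listing above is pure reference, so it can help to see a few of the v1.43 global flags combined on a command line. The sketch below only assembles and prints such a command; `remote:backup` and the source path are placeholders (not anything this commit defines), and the flags and defaults are taken from the list above.

```shell
#!/usr/bin/env bash
# Sketch: compose an `rclone sync` invocation from a few of the global
# flags documented above. "remote:backup" is a placeholder remote name.
cmd=(rclone sync /home/user/data remote:backup
     --transfers 8   # file transfers to run in parallel (default 4)
     --checkers 16   # checkers to run in parallel (default 8)
     --bwlimit 1M    # bandwidth limit in kBytes/s, suffix b|k|M|G
     --dry-run)      # trial run with no permanent changes

# Print the assembled command rather than executing it, since a real
# transfer would need a configured remote.
echo "${cmd[@]}"
```

Printing the command keeps the sketch safe to run without a configured remote; replacing the `echo` with `"${cmd[@]}"` would execute it, and `--dry-run` would still prevent any permanent changes.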


@@ -1,5 +1,5 @@
 ---
-date: 2018-06-16T18:20:28+01:00
+date: 2018-09-01T12:54:54+01:00
 title: "rclone sync"
 slug: rclone_sync
 url: /commands/rclone_sync/
@@ -44,152 +44,261 @@ rclone sync source:path dest:path [flags]
 ### Options inherited from parent commands
 
 ```
-      --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
-      --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
-      --ask-password Allow prompt for password for encrypted configuration. (default true)
-      --auto-confirm If enabled, do not request console confirmation.
-      --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
-      --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
-      --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
-      --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
-      --b2-test-mode string A flag string for X-Bz-Test-Mode header.
-      --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
-      --b2-versions Include old versions in directory listings.
-      --backup-dir string Make backups into hierarchy based in DIR.
-      --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
-      --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-      --buffer-size int Buffer size when copying files. (default 16M)
-      --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
-      --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
-      --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
-      --cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
-      --cache-chunk-size string The size of a chunk (default "5M")
-      --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
-      --cache-db-purge Purge the cache DB before
-      --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
-      --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
-      --cache-info-age string How much time should object info be stored in cache (default "6h")
-      --cache-read-retries int How many times to retry a read from a cache storage (default 10)
-      --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
-      --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
-      --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
-      --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
-      --cache-workers int How many workers should run in parallel to download chunks (default 4)
-      --cache-writes Will cache file data on writes through the FS
-      --checkers int Number of checkers to run in parallel. (default 8)
-  -c, --checksum Skip based on checksum & size, not mod-time & size
-      --config string Config file. (default "/home/ncw/.rclone.conf")
-      --contimeout duration Connect timeout (default 1m0s)
-  -L, --copy-links Follow symlinks and copy the pointed to item.
-      --cpuprofile string Write cpu profile to file
-      --crypt-show-mapping For all files listed show how the names encrypt.
-      --delete-after When synchronizing, delete files on destination after transfering
-      --delete-before When synchronizing, delete files on destination before transfering
-      --delete-during When synchronizing, delete files during transfer (default)
-      --delete-excluded Delete files on dest excluded from sync
-      --disable string Disable a comma separated list of features. Use help to see a list.
-      --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-      --drive-alternate-export Use alternate export URLs for google documents export.
-      --drive-auth-owner-only Only consider files owned by the authenticated user.
-      --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
-      --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
-      --drive-impersonate string Impersonate this user when using a service account.
-      --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
-      --drive-shared-with-me Only show files that are shared with me
-      --drive-skip-gdocs Skip google documents in all listings.
-      --drive-trashed-only Only show files that are in the trash
-      --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
-      --drive-use-created-date Use created date instead of modified date.
-      --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
-      --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-  -n, --dry-run Do a trial run with no permanent changes
-      --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
-      --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
-      --dump-headers Dump HTTP bodies - may contain sensitive info
-      --exclude stringArray Exclude files matching pattern
-      --exclude-from stringArray Read exclude patterns from file
-      --exclude-if-present string Exclude directories if filename is present
-      --fast-list Use recursive list if available. Uses more memory but fewer transactions.
-      --files-from stringArray Read list of source-file names from file
-  -f, --filter stringArray Add a file-filtering rule
-      --filter-from stringArray Read filtering patterns from a file
-      --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
-      --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
-      --ignore-checksum Skip post copy check of checksums.
-      --ignore-errors delete even if there are I/O errors
-      --ignore-existing Skip all files that exist on destination
-      --ignore-size Ignore size when skipping use mod-time or checksum.
-  -I, --ignore-times Don't skip files that match size and time - transfer all files
-      --immutable Do not modify files. Fail if existing files have been modified.
-      --include stringArray Include files matching pattern
-      --include-from stringArray Read include patterns from file
-      --local-no-check-updated Don't check to see if the files change during upload
-      --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
-      --log-file string Log everything to this file
-      --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
-      --low-level-retries int Number of low level retries to do. (default 10)
-      --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-delete int When synchronizing, limit the number of deletes (default -1)
-      --max-depth int If set limits the recursion depth to this. (default -1)
-      --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int Maximum size of data to transfer. (default off)
-      --mega-debug If set then output more debug from mega.
-      --memprofile string Write memory profile to file
-      --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --modify-window duration Max time diff to be considered the same (default 1ns)
-      --no-check-certificate Do not verify the server SSL certificate. Insecure.
-      --no-gzip-encoding Don't set Accept-Encoding: gzip.
-      --no-traverse Obsolete - does nothing.
-      --no-update-modtime Don't update destination mod-time if files identical.
-  -x, --one-file-system Don't cross filesystem boundaries.
-      --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-  -q, --quiet Print as little stuff as possible
-      --rc Enable the remote control server.
-      --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-      --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
-      --rc-client-ca string Client certificate authority to verify clients with
-      --rc-htpasswd string htpasswd file - if not provided no authentication is done
-      --rc-key string SSL PEM Private key
-      --rc-max-header-bytes int Maximum size of request header (default 4096)
-      --rc-pass string Password for authentication.
-      --rc-realm string realm for authentication (default "rclone")
-      --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
-      --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
-      --rc-user string User name for authentication.
-      --retries int Retry operations this many times if they fail (default 3)
-      --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+      --acd-auth-url string Auth server URL.
+      --acd-client-id string Amazon Application Client ID.
+      --acd-client-secret string Amazon Application Client Secret.
+      --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+      --acd-token-url string Token server url.
+      --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+      --alias-remote string Remote or path to alias.
+      --ask-password Allow prompt for password for encrypted configuration. (default true)
+      --auto-confirm If enabled, do not request console confirmation.
+      --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
+      --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
+      --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
+      --azureblob-endpoint string Endpoint for the service
+      --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+      --azureblob-sas-url string SAS URL for container level access only
+      --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
+      --b2-account string Account ID or Application Key ID
+      --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+      --b2-endpoint string Endpoint for the service.
+      --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+      --b2-key string Application Key
+      --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+      --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
+      --b2-versions Include old versions in directory listings.
+      --backup-dir string Make backups into hierarchy based in DIR.
+      --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+      --box-client-id string Box App Client Id.
+      --box-client-secret string Box App Client Secret
+      --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+      --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
+      --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+      --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+      --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
+      --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+      --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
+      --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
+      --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+      --cache-db-purge Purge the cache DB before
+      --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+      --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+      --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
+      --cache-plex-password string The password of the Plex user
+      --cache-plex-url string The URL of the Plex server
+      --cache-plex-username string The username of the Plex user
+      --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+      --cache-remote string Remote to cache.
+      --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+      --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+      --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+      --cache-workers int How many workers should run in parallel to download chunks (default 4)
+      --cache-writes Will cache file data on writes through the FS
+      --checkers int Number of checkers to run in parallel. (default 8)
+  -c, --checksum Skip based on checksum & size, not mod-time & size
+      --config string Config file. (default "/home/ncw/.rclone.conf")
+      --contimeout duration Connect timeout (default 1m0s)
+  -L, --copy-links Follow symlinks and copy the pointed to item.
+      --cpuprofile string Write cpu profile to file
+      --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+      --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+      --crypt-password string Password or pass phrase for encryption.
+      --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+      --crypt-remote string Remote to encrypt/decrypt.
+      --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after When synchronizing, delete files on destination after transfering (default)
+      --delete-before When synchronizing, delete files on destination before transfering
+      --delete-during When synchronizing, delete files during transfer
+      --delete-excluded Delete files on dest excluded from sync
+      --disable string Disable a comma separated list of features. Use help to see a list.
+      --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+      --drive-alternate-export Use alternate export URLs for google documents export.
+      --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+      --drive-client-id string Google Application Client Id
+      --drive-client-secret string Google Application Client Secret
+      --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+      --drive-impersonate string Impersonate this user when using a service account.
+      --drive-keep-revision-forever Keep new head revision forever.
+      --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+      --drive-root-folder-id string ID of the root folder
+      --drive-scope string Scope that rclone should use when requesting access from drive.
+      --drive-service-account-file string Service Account Credentials JSON file path
+      --drive-shared-with-me Only show files that are shared with me
+      --drive-skip-gdocs Skip google documents in all listings.
+      --drive-trashed-only Only show files that are in the trash
+      --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+      --drive-use-created-date Use created date instead of modified date.
+      --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+      --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
+      --dropbox-client-id string Dropbox App Client Id
+      --dropbox-client-secret string Dropbox App Client Secret
+  -n, --dry-run Do a trial run with no permanent changes
+      --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers Dump HTTP bodies - may contain sensitive info
+      --exclude stringArray Exclude files matching pattern
+      --exclude-from stringArray Read exclude patterns from file
+      --exclude-if-present string Exclude directories if filename is present
+      --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+      --files-from stringArray Read list of source-file names from file
+  -f, --filter stringArray Add a file-filtering rule
+      --filter-from stringArray Read filtering patterns from a file
+      --ftp-host string FTP host to connect to
+      --ftp-pass string FTP password
+      --ftp-port string FTP port, leave blank to use default (21)
+      --ftp-user string FTP username, leave blank for current username, ncw
+      --gcs-bucket-acl string Access Control List for new buckets.
+      --gcs-client-id string Google Application Client Id
+      --gcs-client-secret string Google Application Client Secret
+      --gcs-location string Location for the newly created buckets.
+      --gcs-object-acl string Access Control List for new objects.
+      --gcs-project-number string Project number.
+      --gcs-service-account-file string Service Account Credentials JSON file path
+      --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+      --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --jottacloud-mountpoint string The mountpoint to use.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --jottacloud-pass string Password.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --jottacloud-user string User Name
--suffix string Suffix for use with --backup-dir. --local-no-check-updated Don't check to see if the files change during upload
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--syslog Use Syslog for logging --local-nounc string Disable UNC (long path names) conversion on Windows
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --log-file string Log everything to this file
--timeout duration IO idle timeout (default 5m0s) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--tpslimit float Limit HTTP transactions per second to this. --low-level-retries int Number of low level retries to do. (default 10)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--track-renames When synchronizing, track file renames and do a server side move if possible --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--transfers int Number of file transfers to run in parallel. (default 4) --max-delete int When synchronizing, limit the number of deletes (default -1)
-u, --update Skip files that are newer on the destination. --max-depth int If set limits the recursion depth to this. (default -1)
--use-server-modtime Use server modified time instead of object metadata --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42") --max-transfer int Maximum size of data to transfer. (default off)
-v, --verbose count Print lots more stuff (repeat for more) --mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connnection retries. (default 3)
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
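Many of the flags above (e.g. `--drive-chunk-size`, `--b2-upload-cutoff`, `--min-size`) take sizes written with the `b|k|M|G` suffixes, which rclone interprets as binary multiples (k = 1024 bytes, M = 1024², G = 1024³); per the flag help, a bare number on the size filters is read as kBytes. The sketch below is an illustrative Python re-implementation of that convention for clarity only, not rclone's actual parsing code:

```python
# Illustrative sketch of rclone-style size-suffix parsing (binary multiples).
# This is NOT rclone's implementation -- just the documented convention:
# b = bytes, k = KiB, M = MiB, G = GiB; a bare number is treated as kBytes.

SUFFIXES = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}

def parse_size_suffix(value: str) -> int:
    """Convert a value like '5M', '100k' or '16' into a byte count."""
    value = value.strip()
    if value and value[-1] in SUFFIXES:
        return int(float(value[:-1]) * SUFFIXES[value[-1]])
    # No suffix: the size flags above document the default unit as kBytes.
    return int(float(value) * 1024)

# e.g. the b2 upload cutoff default "190.735M" works out to roughly 200 MB.
print(parse_size_suffix("5M"))    # default --s3-chunk-size
print(parse_size_suffix("100k"))  # default --streaming-upload-cutoff
```

This explains why defaults such as `190.735M` look odd in the listing: they are round decimal byte counts rendered back in binary units.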
@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone touch" title: "rclone touch"
slug: rclone_touch slug: rclone_touch
url: /commands/rclone_touch/ url: /commands/rclone_touch/
@ -27,152 +27,261 @@ rclone touch remote:path [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
--delete-after When synchronizing, delete files on destination after transfering --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-before When synchronizing, delete files on destination before transfering --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M) --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
--dump-headers Dump HTTP bodies - may contain sensitive info --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
--exclude-if-present string Exclude directories if filename is present --delete-after When synchronizing, delete files on destination after transfering (default)
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --delete-before When synchronizing, delete files on destination before transfering
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2). --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
--ignore-errors delete even if there are I/O errors --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--no-check-certificate Do not verify the server SSL certificate. Insecure. --dump-headers Dump HTTP bodies - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43

###### Auto generated by spf13/cobra on 1-Sep-2018
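As a brief illustration of how several of the global flags above combine on the command line, here is a hedged usage sketch (the remote name `remote:` and the paths are hypothetical placeholders, not taken from this reference; a remote would first be set up with `rclone config`):

```shell
# Sketch: sync a local directory to a hypothetical remote, combining a
# few of the global flags documented above.
#   --transfers    parallel file transfers (default 4)
#   --checkers     parallel checkers (default 8)
#   --max-transfer stop after this much data has been transferred
#   --stats        interval between printing transfer statistics
rclone sync /home/user/docs remote:backup/docs \
    --transfers 4 --checkers 8 \
    --max-transfer 10G \
    --log-level INFO --stats 30s
```

Flags can equally be set via environment variables of the form `RCLONE_<FLAG_NAME>` (for example `RCLONE_TRANSFERS=8`), which is convenient in scripts.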
---
date: 2018-09-01T12:54:54+01:00
title: "rclone tree"
slug: rclone_tree
url: /commands/rclone_tree/
---

rclone tree remote:path [flags]
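A short, hedged usage sketch for the `tree` command (the remote name is a hypothetical placeholder; `--level` and `-s` are tree's own options for depth and size display):

```shell
# Sketch: display the directory structure of a hypothetical remote as a
# tree, two levels deep, with file sizes shown.
rclone tree remote:path --level 2 -s
```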
### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int              Number of connection retries. (default 3)
      --qingstor-endpoint string                     Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42 * [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018 ###### Auto generated by spf13/cobra on 1-Sep-2018
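Several of the size flags above (`--max-size`, `--min-size`, `--buffer-size`) accept values "in k or suffix b|k|M|G". As a rough sketch of that convention (assuming binary multiples and kBytes as the bare-number unit, which is rclone's usual behaviour — `to_bytes` itself is a hypothetical helper, not part of rclone):

```shell
# Convert an rclone-style size value (b|k|M|G suffix, bare number = kBytes)
# to bytes. Suffix semantics assumed from the flag help text above.
to_bytes() {
  case "$1" in
    *b) echo "${1%b}" ;;
    *k) echo $(( ${1%k} * 1024 )) ;;
    *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
    *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
    *)  echo $(( $1 * 1024 )) ;;   # bare number is interpreted as kBytes
  esac
}
to_bytes 5M    # prints 5242880
```

So `--max-size 5M` would match files up to 5 MiB, and `--max-size 5` files up to 5 kBytes.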
View File
@@ -1,5 +1,5 @@
--- ---
date: 2018-06-16T18:20:28+01:00 date: 2018-09-01T12:54:54+01:00
title: "rclone version" title: "rclone version"
slug: rclone_version slug: rclone_version
url: /commands/rclone_version/ url: /commands/rclone_version/
@@ -10,7 +10,34 @@ Show the version number.
### Synopsis ### Synopsis
Show the version number.
Show the version number, the go version and the architecture.
Eg
$ rclone version
rclone v1.41
- os/arch: linux/amd64
- go version: go1.10
If you supply the --check flag, then it will do an online check to
compare your version with the latest release and the latest beta.
$ rclone version --check
yours: 1.42.0.6
latest: 1.42 (released 2018-06-16)
beta: 1.42.0.5 (released 2018-06-17)
Or
$ rclone version --check
yours: 1.41
latest: 1.42 (released 2018-06-16)
upgrade: https://downloads.rclone.org/v1.42
beta: 1.42.0.5 (released 2018-06-17)
upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
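The "yours:"/"latest:" lines shown above are easy to compare in a script. A minimal sketch, using the sample output from the text (the parsing approach is an illustration, not an rclone feature):

```shell
# Compare the "yours:" and "latest:" lines of `rclone version --check`
# output to decide whether an upgrade is available. The sample text is
# taken from the example above.
sample='yours:  1.41
latest: 1.42 (released 2018-06-16)'
yours=$(printf '%s\n' "$sample" | awk '/^yours:/  {print $2}')
latest=$(printf '%s\n' "$sample" | awk '/^latest:/ {print $2}')
if [ "$yours" != "$latest" ]; then
  echo "upgrade available: $yours -> $latest"
fi
```

In a real script `sample` would be replaced by `$(rclone version --check)`.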
``` ```
rclone version [flags] rclone version [flags]
@@ -19,158 +46,268 @@ rclone version [flags] rclone version [flags]
### Options ### Options
``` ```
-h, --help help for version --check Check for new version.
-h, --help help for version
``` ```
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G) --acd-auth-url string Auth server URL.
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-client-id string Amazon Application Client ID.
--ask-password Allow prompt for password for encrypted configuration. (default true) --acd-client-secret string Amazon Application Client Secret.
--auto-confirm If enabled, do not request console confirmation. --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M) --acd-token-url string Token server url.
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M) --alias-remote string Remote or path to alias.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-test-mode string A flag string for X-Bz-Test-Mode header. --auto-confirm If enabled, do not request console confirmation.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M) --azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--b2-versions Include old versions in directory listings. --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--backup-dir string Make backups into hierarchy based in DIR. --azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --azureblob-endpoint string Endpoint for the service
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--buffer-size int Buffer size when copying files. (default 16M) --azureblob-sas-url string SAS URL for container level access only
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m") --b2-account string Account ID or Application Key ID
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend") --b2-endpoint string Endpoint for the service.
--cache-chunk-size string The size of a chunk (default "5M") --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --b2-key string Application Key
--cache-db-purge Purge the cache DB before --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --b2-versions Include old versions in directory listings.
--cache-info-age string How much time should object info be stored in cache (default "6h") --backup-dir string Make backups into hierarchy based in DIR.
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --box-client-id string Box App Client Id.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --box-client-secret string Box App Client Secret
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m") --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G") --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-writes Will cache file data on writes through the FS --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8) --cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--config string Config file. (default "/home/ncw/.rclone.conf") --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--contimeout duration Connect timeout (default 1m0s) --cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
-L, --copy-links Follow symlinks and copy the pointed to item. --cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cpuprofile string Write cpu profile to file --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--crypt-show-mapping For all files listed show how the names encrypt. --cache-db-purge Purge the cache DB before
      --delete-after                         When synchronizing, delete files on destination after transferring       --cache-db-purge                               Purge the cache DB before
      --delete-before                        When synchronizing, delete files on destination before transferring       --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
--delete-during When synchronizing, delete files during transfer (default) --cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--delete-excluded Delete files on dest excluded from sync --cache-plex-password string The password of the Plex user
--disable string Disable a comma separated list of features. Use help to see a list. --cache-plex-url string The URL of the Plex server
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --cache-plex-username string The username of the Plex user
--drive-alternate-export Use alternate export URLs for google documents export. --cache-read-retries int How many times to retry a read from a cache storage (default 10)
--drive-auth-owner-only Only consider files owned by the authenticated user. --cache-remote string Remote to cache.
      --drive-chunk-size int                 Upload chunk size. Must be a power of 2 >= 256k. (default 8M)       --cache-remote string                          Remote to cache.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--drive-impersonate string Impersonate this user when using a service account. --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --cache-workers int How many workers should run in parallel to download chunks (default 4)
--drive-shared-with-me Only show files that are shared with me --cache-writes Will cache file data on writes through the FS
--drive-skip-gdocs Skip google documents in all listings. --checkers int Number of checkers to run in parallel. (default 8)
--drive-trashed-only Only show files that are in the trash -c, --checksum Skip based on checksum & size, not mod-time & size
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M) --config string Config file. (default "/home/ncw/.rclone.conf")
--drive-use-created-date Use created date instead of modified date. --contimeout duration Connect timeout (default 1m0s)
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M) --cpuprofile string Write cpu profile to file
-n, --dry-run Do a trial run with no permanent changes --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --crypt-password string Password or pass phrase for encryption.
      --dump-headers                         Dump HTTP headers - may contain sensitive info       --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
--exclude stringArray Exclude files matching pattern --crypt-remote string Remote to encrypt/decrypt.
--exclude-from stringArray Read exclude patterns from file --crypt-show-mapping For all files listed show how the names encrypt.
      --exclude-if-present string            Exclude directories if filename is present       --delete-after                                 When synchronizing, delete files on destination after transferring (default)
      --fast-list                            Use recursive list if available. Uses more memory but fewer transactions.       --delete-before                                When synchronizing, delete files on destination before transferring
--files-from stringArray Read list of source-file names from file --delete-during When synchronizing, delete files during transfer
-f, --filter stringArray Add a file-filtering rule --delete-excluded Delete files on dest excluded from sync
--filter-from stringArray Read filtering patterns from a file --disable string Disable a comma separated list of features. Use help to see a list.
      --gcs-location string                  Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).       --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY). --drive-alternate-export Use alternate export URLs for google documents export.
--ignore-checksum Skip post copy check of checksums. --drive-auth-owner-only Only consider files owned by the authenticated user.
      --ignore-errors                        delete even if there are I/O errors       --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--ignore-existing Skip all files that exist on destination --drive-client-id string Google Application Client Id
--ignore-size Ignore size when skipping use mod-time or checksum. --drive-client-secret string Google Application Client Secret
-I, --ignore-times Don't skip files that match size and time - transfer all files --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--immutable Do not modify files. Fail if existing files have been modified. --drive-impersonate string Impersonate this user when using a service account.
--include stringArray Include files matching pattern --drive-keep-revision-forever Keep new head revision forever.
--include-from stringArray Read include patterns from file --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--local-no-check-updated Don't check to see if the files change during upload --drive-root-folder-id string ID of the root folder
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames --drive-scope string Scope that rclone should use when requesting access from drive.
--log-file string Log everything to this file --drive-service-account-file string Service Account Credentials JSON file path
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --drive-shared-with-me Only show files that are shared with me
--low-level-retries int Number of low level retries to do. (default 10) --drive-skip-gdocs Skip google documents in all listings.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --drive-trashed-only Only show files that are in the trash
--max-delete int When synchronizing, limit the number of deletes (default -1) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--max-depth int If set limits the recursion depth to this. (default -1) --drive-use-created-date Use created date instead of modified date.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--max-transfer int Maximum size of data to transfer. (default off) --dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--mega-debug If set then output more debug from mega. --dropbox-client-id string Dropbox App Client Id
--memprofile string Write memory profile to file --dropbox-client-secret string Dropbox App Client Secret
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) -n, --dry-run Do a trial run with no permanent changes
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--modify-window duration Max time diff to be considered the same (default 1ns) --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --no-check-certificate                 Do not verify the server SSL certificate. Insecure.       --dump-headers                                 Dump HTTP headers - may contain sensitive info
--no-gzip-encoding Don't set Accept-Encoding: gzip. --exclude stringArray Exclude files matching pattern
--no-traverse Obsolete - does nothing. --exclude-from stringArray Read exclude patterns from file
--no-update-modtime Don't update destination mod-time if files identical. --exclude-if-present string Exclude directories if filename is present
-x, --one-file-system Don't cross filesystem boundaries. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M) --files-from stringArray Read list of source-file names from file
-q, --quiet Print as little stuff as possible -f, --filter stringArray Add a file-filtering rule
--rc Enable the remote control server. --filter-from stringArray Read filtering patterns from a file
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --ftp-host string FTP host to connect to
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --ftp-pass string FTP password
--rc-client-ca string Client certificate authority to verify clients with --ftp-port string FTP port, leave blank to use default (21)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --ftp-user string FTP username, leave blank for current username, ncw
--rc-key string SSL PEM Private key --gcs-bucket-acl string Access Control List for new buckets.
--rc-max-header-bytes int Maximum size of request header (default 4096) --gcs-client-id string Google Application Client Id
--rc-pass string Password for authentication. --gcs-client-secret string Google Application Client Secret
--rc-realm string realm for authentication (default "rclone") --gcs-location string Location for the newly created buckets.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --gcs-object-acl string Access Control List for new objects.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --gcs-project-number string Project number.
--rc-user string User name for authentication. --gcs-service-account-file string Service Account Credentials JSON file path
--retries int Retry operations this many times if they fail (default 3) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --http-url string URL of http host to connect to
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3 --hubic-client-id string Hubic Client Id
--s3-chunk-size int Chunk size to use for uploading (default 5M) --hubic-client-secret string Hubic Client Secret
--s3-disable-checksum Don't store MD5 checksum with object metadata --ignore-checksum Skip post copy check of checksums.
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA) --ignore-errors delete even if there are I/O errors
--s3-upload-concurrency int Concurrency for multipart uploads (default 2) --ignore-existing Skip all files that exist on destination
--sftp-ask-password Allow asking for SFTP password when needed. --ignore-size Ignore size when skipping use mod-time or checksum.
--size-only Skip based on size only, not mod-time or checksum -I, --ignore-times Don't skip files that match size and time - transfer all files
--skip-links Don't warn about skipped symlinks. --immutable Do not modify files. Fail if existing files have been modified.
--ssh-path-override string Override path used by SSH connection. --include stringArray Include files matching pattern
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for the current username
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43

###### Auto generated by spf13/cobra on 1-Sep-2018
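Every global flag listed above can also be supplied through the environment. A minimal sketch of the name mapping, assuming rclone's documented convention of prefixing `RCLONE_` and uppercasing the flag name; `flag_to_env` is an illustrative helper, not part of rclone:

```shell
# Derive the environment variable name for a global flag:
# drop the leading "--", uppercase, and turn "-" into "_".
flag_to_env() {
  printf 'RCLONE_%s\n' "$(printf '%s' "${1#--}" | tr 'a-z-' 'A-Z_')"
}

flag_to_env --low-level-retries   # RCLONE_LOW_LEVEL_RETRIES
```

For example, `RCLONE_LOW_LEVEL_RETRIES=20 rclone sync src dst:` behaves like passing `--low-level-retries 20` on the command line.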

@@ -1 +1 @@
-v1.42
+v1.43

@@ -1,4 +1,4 @@
 package fs
 
 // Version of rclone
-var Version = "v1.42-DEV"
+var Version = "v1.43"

2016
rclone.1

File diff suppressed because it is too large
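Several of the flags above take a SizeSuffix value (b|k|M|G). A hedged sketch of how those suffixes scale, assuming rclone's 1024-based (binary) units; `to_bytes` is an illustrative helper, not part of rclone:

```shell
# Expand a SizeSuffix such as "10M" or "5G" into a byte count,
# using binary (1024-based) multipliers.
to_bytes() {
  case "$1" in
    *b) echo "${1%b}" ;;
    *k) echo $(( ${1%k} * 1024 )) ;;
    *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
    *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
    *)  echo "$1" ;;
  esac
}

to_bytes 10M   # 10485760, so --max-size 10M accepts files up to 10 MiB
```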