Version v1.43

This commit is contained in:
Nick Craig-Wood 2018-09-01 12:58:00 +01:00
parent a3fec7f030
commit 20c55a6829
67 changed files with 23364 additions and 12145 deletions

File diff suppressed because it is too large

1192 MANUAL.md (file diff suppressed because it is too large)

1320 MANUAL.txt (file diff suppressed because it is too large)

View File

@ -1,13 +1,123 @@
---
title: "Documentation"
description: "Rclone Changelog"
date: "2018-04-28"
date: "2018-09-01"
---
Changelog
---------
# Changelog
## v1.43 - 2018-09-01
* New backends
* Jottacloud (Sebastian Bünger)
* New commands
* copyurl: copies a URL to a remote (Denis)
* New Features
* Reworked config for backends (Nick Craig-Wood)
* All backend config can now be supplied by command line, env var or config file
* Advanced section in the config wizard for the optional items
* A large step towards rclone backends being usable in other go software
* Allow on the fly remotes with :backend: syntax
* Stats revamp
* Add `--progress`/`-P` flag to show interactive progress (Nick Craig-Wood)
* Show the total progress of the sync in the stats (Nick Craig-Wood)
* Add `--stats-one-line` flag for single line stats (Nick Craig-Wood)
* Added weekday schedule into `--bwlimit` (Mateusz)
* lsjson: Add option to show the original object IDs (Fabian Möller)
* serve webdav: Set Content-Type without reading the file and add `--etag-hash` (Nick Craig-Wood)
* build
* Build macOS with native compiler (Nick Craig-Wood)
* Update to use go1.11 for the build (Nick Craig-Wood)
* rc
* Added core/stats to return the stats (reddi1)
* `version --check`: Prints the current release and beta versions (Nick Craig-Wood)
* Bug Fixes
* accounting
* Fix time to completion estimates (Nick Craig-Wood)
* Fix moving average speed for file stats (Nick Craig-Wood)
* config: Fix error reading password from piped input (Nick Craig-Wood)
* move: Fix `--delete-empty-src-dirs` flag to delete all empty dirs on move (ishuah)
* Mount
* Implement `--daemon-timeout` flag for OSXFUSE (Nick Craig-Wood)
* Fix mount `--daemon` not working with encrypted config (Alex Chen)
* Clip the number of blocks to 2^32-1 on macOS - fixes borg backup (Nick Craig-Wood)
* VFS
* Enable vfs-read-chunk-size by default (Fabian Möller)
* Add the vfs/refresh rc command (Fabian Möller)
* Add non recursive mode to vfs/refresh rc command (Fabian Möller)
* Try to seek buffer on read only files (Fabian Möller)
* Local
* Fix crash when deprecated `--local-no-unicode-normalization` is supplied (Nick Craig-Wood)
* Fix mkdir error when trying to copy files to the root of a drive on Windows (Nick Craig-Wood)
* Cache
* Fix nil pointer deref when using lsjson on cached directory (Nick Craig-Wood)
* Fix nil pointer deref for occasional crash on playback (Nick Craig-Wood)
* Crypt
* Fix accounting when checking hashes on upload (Nick Craig-Wood)
* Amazon Cloud Drive
* Make very clear in the docs that rclone has no ACD keys (Nick Craig-Wood)
* Azure Blob
* Add connection string and SAS URL auth (Nick Craig-Wood)
* List the container to see if it exists (Nick Craig-Wood)
* Port new Azure Blob Storage SDK (sandeepkru)
* Added blob tier support for tiering between Hot, Cool and Archive (sandeepkru)
* Remove leading / from paths (Nick Craig-Wood)
* B2
* Support Application Keys (Nick Craig-Wood)
* Remove leading / from paths (Nick Craig-Wood)
* Box
* Fix upload of > 2GB files on 32 bit platforms (Nick Craig-Wood)
* Add `--box-commit-retries` flag, defaulting to 100, to fix large uploads (Nick Craig-Wood)
* Drive
* Add `--drive-keep-revision-forever` flag (lewapm)
* Handle gdocs when filtering file names in list (Fabian Möller)
* Support using `--fast-list` for large speedups (Fabian Möller)
* FTP
* Fix Put mkParentDir failed: 521 for BunnyCDN (Nick Craig-Wood)
* Google Cloud Storage
* Fix index out of range error with `--fast-list` (Nick Craig-Wood)
* Jottacloud
* Fix MD5 error check (Oliver Heyme)
* Handle empty time values (Martin Polden)
* Calculate missing MD5s (Oliver Heyme)
* Docs, fixes and tests for MD5 calculation (Nick Craig-Wood)
* Add optional MimeTyper interface. (Sebastian Bünger)
* Implement optional About interface (for `df` support). (Sebastian Bünger)
* Mega
* Wait for events instead of arbitrary sleeping (Nick Craig-Wood)
* Add `--mega-hard-delete` flag (Nick Craig-Wood)
* Fix failed logins with upper case chars in email (Nick Craig-Wood)
* Onedrive
* Shared folder support (Yoni Jah)
* Implement DirMove (Cnly)
* Fix rmdir sometimes deleting directories with contents (Nick Craig-Wood)
* Pcloud
* Delete half uploaded files on upload error (Nick Craig-Wood)
* Qingstor
* Remove leading / from paths (Nick Craig-Wood)
* S3
* Fix index out of range error with `--fast-list` (Nick Craig-Wood)
* Add `--s3-force-path-style` (Nick Craig-Wood)
* Add support for KMS Key ID (bsteiss)
* Remove leading / from paths (Nick Craig-Wood)
* Swift
* Add `storage_policy` (Ruben Vandamme)
* Make it so just `storage_url` or `auth_token` can be overridden (Nick Craig-Wood)
* Fix server side copy bug for unusual file names (Nick Craig-Wood)
* Remove leading / from paths (Nick Craig-Wood)
* WebDAV
* Ensure we call MKCOL with a URL with a trailing / for QNAP interop (Nick Craig-Wood)
* If root ends with / then don't check if it is a file (Nick Craig-Wood)
* Don't accept redirects when reading metadata (Nick Craig-Wood)
* Add bearer token (Macaroon) support for dCache (Nick Craig-Wood)
* Document dCache and Macaroons (Onno Zweers)
* Sharepoint recursion with different depth (Henning)
* Attempt to remove failed uploads (Nick Craig-Wood)
* Yandex
* Fix listing/deleting files in the root (Nick Craig-Wood)
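To make the headline items above concrete (the new `--progress`/`-P` and `--stats-one-line` flags, the `copyurl` command, on-the-fly `:backend:` remotes configured from command line flags, and the `core/stats` rc method), here is a minimal usage sketch. The remote name, paths, URL and bucket are placeholders, not part of the release notes.

```
# Sync with live progress and single-line stats
rclone sync /local/photos remote:photos -P --stats-one-line

# Copy the contents of a URL straight to a remote
rclone copyurl https://example.com/file.zip remote:incoming/file.zip

# Use an on-the-fly S3 remote (no entry in the config file needed),
# supplying backend options as command line flags
rclone lsd :s3:example-bucket --s3-provider AWS --s3-env-auth

# Ask a running rclone started with --rc for its transfer stats
rclone rc core/stats
```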
## v1.42 - 2018-06-16
* v1.42 - 2018-06-16
* New backends
* OpenDrive (Oliver Heyme, Jakub Karlicek, ncw)
* New commands
@ -86,7 +196,9 @@ Changelog
* Add workarounds for biz.mail.ru
* Ignore Reason-Phrase in status line to fix 4shared (Rodrigo)
* Better error message generation
* v1.41 - 2018-04-28
## v1.41 - 2018-04-28
* New backends
* Mega support added
* Webdav now supports SharePoint cookie authentication (hensur)
@ -178,7 +290,9 @@ Changelog
* Webdav
* Support SharePoint cookie authentication (hensur)
* Strip leading and trailing / off root
* v1.40 - 2018-03-19
## v1.40 - 2018-03-19
* New backends
* Alias backend to create aliases for existing remote names (Fabian Möller)
* New commands
@ -324,7 +438,9 @@ Changelog
* this makes rclone do one less request per invocation
* Webdav
* Add new time formats to support mydrive.ch and others
* v1.39 - 2017-12-23
## v1.39 - 2017-12-23
* New backends
* WebDAV
* tested with nextcloud, owncloud, put.io and others!
@ -400,7 +516,9 @@ Changelog
* Implement DirChangeNotify (Fabian Möller)
* onedrive
* Add option to choose resourceURL during setup of OneDrive Business account if more than one is available for user
* v1.38 - 2017-09-30
## v1.38 - 2017-09-30
* New backends
* Azure Blob Storage (thanks Andrei Dragomir)
* Box
@ -466,7 +584,9 @@ Changelog
* Fix URL encoding issues
* Fix directories with `:` in
* Fix panic with URL encoded content
* v1.37 - 2017-07-22
## v1.37 - 2017-07-22
* New backends
* FTP - thanks to Antonio Messina
* HTTP - thanks to Vasiliy Tolstov
@ -569,7 +689,9 @@ Changelog
* Fix under Windows
* Fix ssh agent on Windows
* Adapt to latest version of library - Igor Kharin
* v1.36 - 2017-03-18
## v1.36 - 2017-03-18
* New Features
* SFTP remote (Jack Schmidt)
* Re-implement sync routine to work a directory at a time reducing memory usage
@ -652,7 +774,9 @@ Changelog
* Fix depth 1 listing
* S3
* Added ca-central-1 region (Jon Yergatian)
* v1.35 - 2017-01-02
## v1.35 - 2017-01-02
* New Features
* moveto and copyto commands for choosing a destination name on copy/move
* rmdirs command to recursively delete empty directories
@ -690,7 +814,9 @@ Changelog
* Drive
* Make DirMove more efficient and complain about moving the root
* Create destination directory on Move()
* v1.34 - 2016-11-06
## v1.34 - 2016-11-06
* New Features
* Stop single file and `--files-from` operations iterating through the source bucket.
* Stop removing failed upload to cloud storage remotes
@ -746,7 +872,9 @@ Changelog
* add `.epub`, `.odp` and `.tsv` as export formats.
* Swift
* Don't read metadata for directory marker objects
* v1.33 - 2016-08-24
## v1.33 - 2016-08-24
* New Features
* Implement encryption
* data encrypted in NACL secretbox format
@ -776,10 +904,14 @@ Changelog
* local
* Fix filenames with invalid UTF-8 not being uploaded
* Fix problem with some UTF-8 characters on OS X
* v1.32 - 2016-07-13
## v1.32 - 2016-07-13
* Backblaze B2
* Fix upload of large files not in root
* v1.31 - 2016-07-13
## v1.31 - 2016-07-13
* New Features
* Reduce memory on sync by about 50%
* Implement --no-traverse flag to stop copy traversing the destination remote.
@ -815,7 +947,9 @@ Changelog
* Make upload multi-threaded.
* Dropbox
* Don't retry 461 errors.
* v1.30 - 2016-06-18
## v1.30 - 2016-06-18
* New Features
* Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables
* Directory include filtering for efficiency
@ -851,7 +985,9 @@ Changelog
* Swift
* Add auth version parameter
* Add domain option for openstack (v3 auth) - thanks Fabian Ruff
* v1.29 - 2016-04-18
## v1.29 - 2016-04-18
* New Features
* Implement `-I, --ignore-times` for unconditional upload
* Improve `dedupe` command
@ -885,7 +1021,9 @@ Changelog
* Don't return an MD5SUM for static large objects
* S3
* Fix uploading files bigger than 50GB
* v1.28 - 2016-03-01
## v1.28 - 2016-03-01
* New Features
* Configuration file encryption - thanks Klaus Post
* Improve `rclone config` adding more help and making it easier to understand
@ -913,7 +1051,9 @@ Changelog
* Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon
* Amazon Drive
* Retry on more things to make directory listings more reliable
* v1.27 - 2016-01-31
## v1.27 - 2016-01-31
* New Features
* Easier headless configuration with `rclone authorize`
* Add support for multiple hash types - we now check SHA1 as well as MD5 hashes.
@ -942,7 +1082,9 @@ Changelog
* Fix updating of mod times of files with `+` in.
* Local
* Add local file system option to disable UNC on Windows.
* v1.26 - 2016-01-02
## v1.26 - 2016-01-02
* New Features
* Yandex storage backend - thank you Dmitry Burdeev ("dibu")
* Implement Backblaze B2 storage backend
@ -958,7 +1100,9 @@ Changelog
* Don't mask HTTP error codes with JSON decode error
* S3
* Fix corrupting Content-Type on mod time update (thanks Joseph Spurrier)
* v1.25 - 2015-11-14
## v1.25 - 2015-11-14
* New features
* Implement Hubic storage system
* Fixes
@ -971,7 +1115,9 @@ Changelog
* Use ContentType from Object to avoid lookups in listings
* OneDrive
* disable server side copy as it seems to be broken at Microsoft
* v1.24 - 2015-11-07
## v1.24 - 2015-11-07
* New features
* Add support for Microsoft OneDrive
* Add `--no-check-certificate` option to disable server certificate verification
@ -986,7 +1132,9 @@ Changelog
* Don't delete the bucket if fs wasn't at root
* Google Cloud Storage
* Don't delete the bucket if fs wasn't at root
* v1.23 - 2015-10-03
## v1.23 - 2015-10-03
* New features
* Implement `rclone size` for measuring remotes
* Fixes
@ -998,11 +1146,15 @@ Changelog
* Swift
* Stop chunked operations logging "Failed to read info: Object Not Found"
* Use Content-Length on uploads for extra reliability
* v1.22 - 2015-09-28
## v1.22 - 2015-09-28
* Implement rsync like include and exclude flags
* swift
* Support files > 5GB - thanks Sergey Tolmachev
* v1.21 - 2015-09-22
## v1.21 - 2015-09-22
* New features
* Display individual transfer progress
* Make lsl output times in localtime
@ -1012,7 +1164,9 @@ Changelog
* Implement compliant pacing scheme
* Google Drive
* Make directory reads concurrent for increased speed.
* v1.20 - 2015-09-15
## v1.20 - 2015-09-15
* New features
* Amazon Drive support
* Oauth support redone - fix many bugs and improve usability
@ -1027,7 +1181,9 @@ Changelog
* dropbox
* force use of our custom transport which makes timeouts work
* Thanks to Klaus Post for lots of help with this release
* v1.19 - 2015-08-28
## v1.19 - 2015-08-28
* New features
* Server side copies for s3/swift/drive/dropbox/gcs
* Move command - uses server side copies if it can
@ -1039,7 +1195,9 @@ Changelog
* dropbox
* Increase chunk size to improve upload speeds massively
* Issue an error message when trying to upload a bad file name
* v1.18 - 2015-08-17
## v1.18 - 2015-08-17
* drive
* Add `--drive-use-trash` flag so rclone trashes instead of deletes
* Add "Forbidden to download" message for files with no downloadURL
@ -1053,39 +1211,52 @@ Changelog
* **NB** will most likely require you to delete and recreate remote
* enable multipart upload which enables files > 5GB
* tested with Ceph / RadosGW / S3 emulation
* many thanks to Sam Liston and Brian Haymore at the [Utah
Center for High Performance Computing](https://www.chpc.utah.edu/) for a Ceph test account
* many thanks to Sam Liston and Brian Haymore at the [Utah Center for High Performance Computing](https://www.chpc.utah.edu/) for a Ceph test account
* misc
* Show errors when reading the config file
* Do not print stats in quiet mode - thanks Leonid Shalupov
* Add FAQ
* Fix created directories not obeying umask
* Linux installation instructions - thanks Shimon Doodkin
* v1.17 - 2015-06-14
## v1.17 - 2015-06-14
* dropbox: fix case insensitivity issues - thanks Leonid Shalupov
* v1.16 - 2015-06-09
## v1.16 - 2015-06-09
* Fix uploading big files which was causing timeouts or panics
* Don't check md5sum after download with --size-only
* v1.15 - 2015-06-06
## v1.15 - 2015-06-06
* Add --checksum flag to only discard transfers by MD5SUM - thanks Alex Couper
* Implement --size-only flag to sync on size not checksum & modtime
* Expand docs and remove duplicated information
* Document rclone's limitations with directories
* dropbox: update docs about case insensitivity
* v1.14 - 2015-05-21
## v1.14 - 2015-05-21
* local: fix encoding of non utf-8 file names - fixes a duplicate file problem
* drive: docs about rate limiting
* google cloud storage: Fix compile after API change in "google.golang.org/api/storage/v1"
* v1.13 - 2015-05-10
## v1.13 - 2015-05-10
* Revise documentation (especially sync)
* Implement --timeout and --conntimeout
* s3: ignore etags from multipart uploads which aren't md5sums
* v1.12 - 2015-03-15
## v1.12 - 2015-03-15
* drive: Use chunked upload for files above a certain size
* drive: add --drive-chunk-size and --drive-upload-cutoff parameters
* drive: switch to insert from update when a failed copy deletes the upload
* core: Log duplicate files if they are detected
* v1.11 - 2015-03-04
## v1.11 - 2015-03-04
* swift: add region parameter
* drive: fix crash on failed to update remote mtime
* In remote paths, change native directory separators to /
@ -1094,24 +1265,36 @@ Changelog
* Add --log-file flag to log everything (including panics) to file
* Make it possible to disable stats printing with --stats=0
* Implement --bwlimit to limit data transfer bandwidth
* v1.10 - 2015-02-12
## v1.10 - 2015-02-12
* s3: list an unlimited number of items
* Fix getting stuck in the configurator
* v1.09 - 2015-02-07
## v1.09 - 2015-02-07
* windows: Stop drive letters (eg C:) getting mixed up with remotes (eg drive:)
* local: Fix directory separators on Windows
* drive: fix rate limit exceeded errors
* v1.08 - 2015-02-04
## v1.08 - 2015-02-04
* drive: fix subdirectory listing to not list entire drive
* drive: Fix SetModTime
* dropbox: adapt code to recent library changes
* v1.07 - 2014-12-23
## v1.07 - 2014-12-23
* google cloud storage: fix memory leak
* v1.06 - 2014-12-12
## v1.06 - 2014-12-12
* Fix "Couldn't find home directory" on OSX
* swift: Add tenant parameter
* Use new location of Google API packages
* v1.05 - 2014-08-09
## v1.05 - 2014-08-09
* Improved tests and consequently lots of minor fixes
* core: Fix race detected by go race detector
* core: Fixes after running errcheck
@ -1122,50 +1305,83 @@ Changelog
* s3: make reading metadata more reliable to work around eventual consistency problems
* s3: strip trailing / from ListDir()
* swift: return directories without / in ListDir
* v1.04 - 2014-07-21
## v1.04 - 2014-07-21
* google cloud storage: Fix crash on Update
* v1.03 - 2014-07-20
## v1.03 - 2014-07-20
* swift, s3, dropbox: fix updated files being marked as corrupted
* Make compile with go 1.1 again
* v1.02 - 2014-07-19
## v1.02 - 2014-07-19
* Implement Dropbox remote
* Implement Google Cloud Storage remote
* Verify Md5sums and Sizes after copies
* Remove times from "ls" command - lists sizes only
* Add add "lsl" - lists times and sizes
* Add "md5sum" command
* v1.01 - 2014-07-04
## v1.01 - 2014-07-04
* drive: fix transfer of big files using up lots of memory
* v1.00 - 2014-07-03
## v1.00 - 2014-07-03
* drive: fix whole second dates
* v0.99 - 2014-06-26
## v0.99 - 2014-06-26
* Fix --dry-run not working
* Make compatible with go 1.1
* v0.98 - 2014-05-30
## v0.98 - 2014-05-30
* s3: Treat missing Content-Length as 0 for some ceph installations
* rclonetest: add file with a space in
* v0.97 - 2014-05-05
## v0.97 - 2014-05-05
* Implement copying of single files
* s3 & swift: support paths inside containers/buckets
* v0.96 - 2014-04-24
## v0.96 - 2014-04-24
* drive: Fix multiple files of same name being created
* drive: Use o.Update and fs.Put to optimise transfers
* Add version number, -V and --version
* v0.95 - 2014-03-28
## v0.95 - 2014-03-28
* rclone.org: website, docs and graphics
* drive: fix path parsing
* v0.94 - 2014-03-27
## v0.94 - 2014-03-27
* Change remote format one last time
* GNU style flags
* v0.93 - 2014-03-16
## v0.93 - 2014-03-16
* drive: store token in config file
* cross compile other versions
* set strict permissions on config file
* v0.92 - 2014-03-15
## v0.92 - 2014-03-15
* Config fixes and --config option
* v0.91 - 2014-03-15
## v0.91 - 2014-03-15
* Make config file
* v0.90 - 2013-06-27
## v0.90 - 2013-06-27
* Project named rclone
* v0.00 - 2012-11-18
## v0.00 - 2012-11-18
* Project started
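Relating back to the v1.43 config rework noted at the top of this changelog (backend config supplied by command line, env var or config file), the sketch below shows the environment variable form. It assumes the `RCLONE_CONFIG_<REMOTE>_<OPTION>` naming convention described in the rclone docs; the remote name and credential values are placeholders, so check the backend docs for the exact option names.

```
# Define an S3 remote purely from the environment, then use it without
# touching rclone.conf (remote name and credentials are placeholders)
export RCLONE_CONFIG_MYS3_TYPE=s3
export RCLONE_CONFIG_MYS3_PROVIDER=AWS
export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=YYY
rclone lsd mys3:
```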

View File

@ -1,12 +1,12 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone
Sync files and directories to and from local and remote object stores - v1.42
Sync files and directories to and from local and remote object stores - v1.43
### Synopsis
@ -24,6 +24,7 @@ from various cloud storage systems and using file transfer services, such as:
* Google Drive
* HTTP
* Hubic
* Jottacloud
* Mega
* Microsoft Azure Blob Storage
* Microsoft OneDrive
@ -59,36 +60,56 @@ rclone [flags]
### Options
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -97,26 +118,39 @@ rclone [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -128,9 +162,22 @@ rclone [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
-h, --help help for rclone
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -139,17 +186,26 @@ rclone [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -158,8 +214,21 @@ rclone [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -175,22 +244,56 @@ rclone [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -200,9 +303,16 @@ rclone [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
@ -216,12 +326,13 @@ rclone [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied
* [rclone copyurl](/commands/rclone_copyurl/) - Copy url content to dest.
* [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of a crypted remote.
* [rclone cryptdecode](/commands/rclone_cryptdecode/) - Cryptdecode returns unencrypted file names.
* [rclone dbhashsum](/commands/rclone_dbhashsum/) - Produces a Dropbox hash file for all the objects in the path.
* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files and delete/rename them.
* [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
* [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file path from remote.
* [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file from remote.
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone hashsum](/commands/rclone_hashsum/) - Produces a hashsum file for all the objects in the path.
@ -252,4 +363,4 @@ rclone [flags]
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone about"
slug: rclone_about
url: /commands/rclone_about/
@ -69,36 +69,56 @@ rclone about remote: [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -107,26 +127,39 @@ rclone about remote: [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -138,8 +171,21 @@ rclone about remote: [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -148,17 +194,26 @@ rclone about remote: [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -167,8 +222,21 @@ rclone about remote: [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -184,22 +252,56 @@ rclone about remote: [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -209,12 +311,19 @@ rclone about remote: [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
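
As a brief, purely illustrative sketch of how the inherited options above combine with this command: the remote name `gdrive:` and the impersonated address are placeholders, not values taken from this page.

```
# Show quota and usage for the (placeholder) remote "gdrive:" with
# verbose output enabled via the inherited -v flag.
rclone about gdrive: -v

# The same command overriding a backend option on the command line;
# the impersonated address is purely illustrative.
rclone about gdrive: --drive-impersonate admin@example.com
```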
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@ -28,36 +28,56 @@ rclone authorize [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before use.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -66,26 +86,39 @@ rclone authorize [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -97,8 +130,21 @@ rclone authorize [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -107,17 +153,26 @@ rclone authorize [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -126,8 +181,21 @@ rclone authorize [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -143,22 +211,56 @@ rclone authorize [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -168,12 +270,19 @@ rclone authorize [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
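
As an illustrative sketch only: during a headless `rclone config` run, rclone prints an `rclone authorize` command to execute on a machine that has a web browser. The backend name `"drive"` below is an assumed example; use whatever command rclone config actually prints.

```
# Run on a machine with a web browser; the quoted backend name is an
# example. Paste the token this prints back into the headless
# "rclone config" session.
rclone authorize "drive"
```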
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/
@ -27,36 +27,56 @@ rclone cachestats source: [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before use.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -65,26 +85,39 @@ rclone cachestats source: [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -96,8 +129,21 @@ rclone cachestats source: [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -106,17 +152,26 @@ rclone cachestats source: [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -125,8 +180,21 @@ rclone cachestats source: [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -142,22 +210,56 @@ rclone cachestats source: [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -167,12 +269,19 @@ rclone cachestats source: [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
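
A brief, hypothetical example: it assumes a remote named `cachedmedia:` that is backed by the cache backend (not defined on this page) and shows one of the cache flags above being overridden on the command line.

```
# Print statistics for the (placeholder) cache remote "cachedmedia:",
# pointing the cache database at a non-default directory.
rclone cachestats cachedmedia: --cache-db-path /tmp/rclone-cache
```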
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
@ -49,36 +49,56 @@ rclone cat remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before use.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -87,26 +107,39 @@ rclone cat remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -118,8 +151,21 @@ rclone cat remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -128,17 +174,26 @@ rclone cat remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -147,8 +202,21 @@ rclone cat remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -164,22 +232,56 @@ rclone cat remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -189,12 +291,19 @@ rclone cat remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
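Not part of the generated page: a minimal, illustrative invocation using the inherited flags listed above. The remote name and path are placeholders, not taken from the docs.

```
# Placeholder remote and path, shown only to illustrate the global flags above.
# -q suppresses log output so only the file contents reach stdout.
rclone cat remote:logs/app.log -q
```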

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@ -43,36 +43,56 @@ rclone check source:path dest:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -81,26 +101,39 @@ rclone check source:path dest:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -112,8 +145,21 @@ rclone check source:path dest:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -122,17 +168,26 @@ rclone check source:path dest:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -141,8 +196,21 @@ rclone check source:path dest:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -158,22 +226,56 @@ rclone check source:path dest:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -183,12 +285,19 @@ rclone check source:path dest:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
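As a sketch only (the local path and remote are hypothetical and not part of the generated docs), `rclone check` can be combined with flags from the inherited list above:

```
# Placeholder source and destination, for illustration only.
# --size-only compares sizes rather than checksums; -P displays live progress.
rclone check /home/user/photos remote:photos --size-only -P
```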

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@ -28,36 +28,56 @@ rclone cleanup remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -66,26 +86,39 @@ rclone cleanup remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -97,8 +130,21 @@ rclone cleanup remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -107,17 +153,26 @@ rclone cleanup remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -126,8 +181,21 @@ rclone cleanup remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -143,22 +211,56 @@ rclone cleanup remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -168,12 +270,19 @@ rclone cleanup remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
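For illustration only (the remote name is a placeholder, not from the generated page), a basic `rclone cleanup` call using a flag from the inherited list above:

```
# Placeholder remote, for illustration only.
# -v raises the log level so more detail about the cleanup is logged.
rclone cleanup remote:bucket -v
```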

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@ -28,36 +28,56 @@ rclone config [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -66,26 +86,39 @@ rclone config [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -97,8 +130,21 @@ rclone config [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -107,17 +153,26 @@ rclone config [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -126,8 +181,21 @@ rclone config [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -143,22 +211,56 @@ rclone config [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -168,13 +270,20 @@ rclone config [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
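Any of the inherited flags listed above can also be supplied on the command line of other rclone commands. A minimal sketch, assuming a previously configured cache remote named `mycache:` (the remote name, path and values here are illustrative only):

```
# Override two cache backend options for a single listing run
rclone lsd mycache: --cache-chunk-path /tmp/rclone-chunks --cache-chunk-size 10M -v
```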
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
* [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options.
* [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote <name>.
* [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON.
@ -185,4 +294,4 @@ rclone config [flags]
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/
@ -33,36 +33,56 @@ rclone config create <name> <type> [<key> <value>]* [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before use
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -71,26 +91,39 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -102,8 +135,21 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -112,17 +158,26 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -131,8 +186,21 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -148,22 +216,56 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -173,12 +275,19 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
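A minimal sketch of the `rclone config create <name> <type> [<key> <value>]*` form, creating a remote without the interactive wizard; the remote name `myserver` and all values below are hypothetical:

```
# Create an SFTP remote called "myserver" in one command
rclone config create myserver sftp host sftp.example.com user backupuser port 22
```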
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/
@ -25,36 +25,56 @@ rclone config delete <name> [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before use
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -63,26 +83,39 @@ rclone config delete <name> [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -94,8 +127,21 @@ rclone config delete <name> [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -104,17 +150,26 @@ rclone config delete <name> [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -123,8 +178,21 @@ rclone config delete <name> [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -140,22 +208,56 @@ rclone config delete <name> [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -165,12 +267,19 @@ rclone config delete <name> [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
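A minimal usage sketch, assuming a remote named `myserver` already exists (the name is hypothetical):

```
# Remove the remote, then confirm it is gone
rclone config delete myserver
rclone config show
```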
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config dump"
slug: rclone_config_dump
url: /commands/rclone_config_dump/
@ -25,36 +25,56 @@ rclone config dump [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@@ -63,26 +83,39 @@ rclone config dump [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@@ -94,8 +127,21 @@ rclone config dump [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@@ -104,17 +150,26 @@ rclone config dump [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@@ -123,8 +178,21 @@ rclone config dump [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@@ -140,22 +208,56 @@ rclone config dump [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@@ -165,12 +267,19 @@ rclone config dump [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
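`rclone config dump` writes the current configuration to stdout as JSON, which makes it easy to post-process. A short sketch, assuming `jq` is installed (the remote names shown are illustrative):

```
# List the names of all configured remotes
rclone config dump | jq -r 'keys[]'
#   gdrive
#   s3-backup
```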


@@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config edit"
slug: rclone_config_edit
url: /commands/rclone_config_edit/
@@ -28,36 +28,56 @@ rclone config edit [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@@ -66,26 +86,39 @@ rclone config edit [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@@ -97,8 +130,21 @@ rclone config edit [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@@ -107,17 +153,26 @@ rclone config edit [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@@ -126,8 +181,21 @@ rclone config edit [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@@ -143,22 +211,56 @@ rclone config edit [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@@ -168,12 +270,19 @@ rclone config edit [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config file"
slug: rclone_config_file
url: /commands/rclone_config_file/
@@ -25,36 +25,56 @@ rclone config file [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@@ -63,26 +83,39 @@ rclone config file [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@@ -94,8 +127,21 @@ rclone config file [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@@ -104,17 +150,26 @@ rclone config file [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@@ -123,8 +178,21 @@ rclone config file [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@@ -140,22 +208,56 @@ rclone config file [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -165,12 +267,19 @@ rclone config file [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
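For quick reference, `rclone config file` (whose inherited flags are listed above) takes no arguments and simply reports which configuration file rclone is using. The message wording and path below are illustrative placeholders only, not output captured for this commit.

```
$ rclone config file
Configuration file is stored at:
/home/user/.config/rclone/rclone.conf
```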

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/
@ -32,36 +32,56 @@ rclone config password <name> [<key> <value>]+ [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -70,26 +90,39 @@ rclone config password <name> [<key> <value>]+ [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after                              When synchronizing, delete files on destination after transferring
      --delete-after                              When synchronizing, delete files on destination after transferring (default)
      --delete-before                             When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                      Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-chunk-size SizeSuffix               Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -101,8 +134,21 @@ rclone config password <name> [<key> <value>]+ [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
      --gcs-location string                       Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
      --ignore-errors                             Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -111,17 +157,26 @@ rclone config password <name> [<key> <value>]+ [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -130,8 +185,21 @@ rclone config password <name> [<key> <value>]+ [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int           Number of connection retries. (default 3)
      --qingstor-endpoint string                  Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                         Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -147,22 +215,56 @@ rclone config password <name> [<key> <value>]+ [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -172,12 +274,19 @@ rclone config password <name> [<key> <value>]+ [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
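A minimal usage sketch for `rclone config password`, following the `rclone config password <name> [<key> <value>]+` form shown in the hunk above: it updates keys on an existing remote without the interactive wizard. The remote name `myremote` and the value below are hypothetical placeholders.

```
# set the "pass" key of an existing remote non-interactively,
# e.g. from a provisioning script
$ rclone config password myremote pass MyS3cretPassword
```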

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config providers"
slug: rclone_config_providers
url: /commands/rclone_config_providers/
@ -25,36 +25,56 @@ rclone config providers [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -63,26 +83,39 @@ rclone config providers [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after                              When synchronizing, delete files on destination after transferring
      --delete-after                              When synchronizing, delete files on destination after transferring (default)
      --delete-before                             When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                      Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-chunk-size SizeSuffix               Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -94,8 +127,21 @@ rclone config providers [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
      --gcs-location string                       Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
      --ignore-errors                             Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -104,17 +150,26 @@ rclone config providers [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -123,8 +178,21 @@ rclone config providers [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int           Number of connection retries. (default 3)
      --qingstor-endpoint string                  Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                         Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -140,22 +208,56 @@ rclone config providers [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -165,12 +267,19 @@ rclone config providers [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
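For context, `rclone config providers` (above) and `rclone config show` (next) are read-only helpers: one lists the supported backends, the other prints the stored settings. A minimal sketch, assuming a remote named `myremote` has already been configured:

```
# list every backend this rclone build supports
$ rclone config providers

# print the stored settings for a single remote
$ rclone config show myremote
```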

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config show"
slug: rclone_config_show
url: /commands/rclone_config_show/
@ -25,36 +25,56 @@ rclone config show [<remote>] [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -63,26 +83,39 @@ rclone config show [<remote>] [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after                              When synchronizing, delete files on destination after transferring
      --delete-after                              When synchronizing, delete files on destination after transferring (default)
      --delete-before                             When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                      Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-chunk-size SizeSuffix               Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -94,8 +127,21 @@ rclone config show [<remote>] [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
      --gcs-location string                       Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
      --ignore-errors                             Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -104,17 +150,26 @@ rclone config show [<remote>] [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -123,8 +178,21 @@ rclone config show [<remote>] [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int          Number of connection retries. (default 3)
      --qingstor-endpoint string                 Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                        Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -140,22 +208,56 @@ rclone config show [<remote>] [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -165,12 +267,19 @@ rclone config show [<remote>] [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
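
A minimal usage sketch of the command documented on this page; `myremote` is only a placeholder for a remote that already exists in your configuration:

```
# Print the contents of the config file
rclone config show

# Print the config for a single remote ("myremote" is a placeholder)
rclone config show myremote
```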
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config update"
slug: rclone_config_update
url: /commands/rclone_config_update/
@ -32,36 +32,56 @@ rclone config update <name> [<key> <value>]+ [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -70,26 +90,39 @@ rclone config update <name> [<key> <value>]+ [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after                             When synchronizing, delete files on destination after transferring
      --delete-after                             When synchronizing, delete files on destination after transferring (default)
      --delete-before                            When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                     Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-chunk-size SizeSuffix              Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -101,8 +134,21 @@ rclone config update <name> [<key> <value>]+ [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
      --gcs-location string                      Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -111,17 +157,26 @@ rclone config update <name> [<key> <value>]+ [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -130,8 +185,21 @@ rclone config update <name> [<key> <value>]+ [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int          Number of connection retries. (default 3)
      --qingstor-endpoint string                 Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                        Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -147,22 +215,56 @@ rclone config update <name> [<key> <value>]+ [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -172,12 +274,19 @@ rclone config update <name> [<key> <value>]+ [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
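
A minimal non-interactive sketch of the command documented on this page; both the remote name `myremote` and the `env_auth` key are illustrative placeholders — substitute whatever remote name and backend option keys apply to your setup:

```
# Update a single key/value pair on an existing remote.
# "myremote" and "env_auth" are placeholders, not required names.
rclone config update myremote env_auth true
```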
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@ -61,36 +61,56 @@ rclone copy source:path dest:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -99,26 +119,39 @@ rclone copy source:path dest:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after                             When synchronizing, delete files on destination after transferring
      --delete-after                             When synchronizing, delete files on destination after transferring (default)
      --delete-before                            When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                     Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-chunk-size SizeSuffix              Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -130,8 +163,21 @@ rclone copy source:path dest:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
      --gcs-location string                      Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -140,17 +186,26 @@ rclone copy source:path dest:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -159,8 +214,21 @@ rclone copy source:path dest:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int          Number of connection retries. (default 3)
      --qingstor-endpoint string                 Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                        Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -176,22 +244,56 @@ rclone copy source:path dest:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -201,12 +303,19 @@ rclone copy source:path dest:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
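
A minimal sketch combining `rclone copy` with a few of the global flags listed above; the local source directory and the `remote:backup` destination are illustrative placeholders:

```
# Copy a local directory to a remote, showing interactive progress (-P),
# running 8 transfers in parallel and limiting bandwidth to 4 MByte/s.
# "/home/user/data" and "remote:backup" are placeholders.
rclone copy /home/user/data remote:backup -P --transfers 8 --bwlimit 4M
```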
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
@ -51,36 +51,56 @@ rclone copyto source:path dest:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -89,26 +109,39 @@ rclone copyto source:path dest:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after                             When synchronizing, delete files on destination after transferring
      --delete-after                             When synchronizing, delete files on destination after transferring (default)
      --delete-before                            When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                     Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-chunk-size SizeSuffix              Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -120,8 +153,21 @@ rclone copyto source:path dest:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -130,17 +176,26 @@ rclone copyto source:path dest:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -149,8 +204,21 @@ rclone copyto source:path dest:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IP address:Port or :Port to bind server to. (default "localhost:5572")
@ -166,22 +234,56 @@ rclone copyto source:path dest:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -191,12 +293,19 @@ rclone copyto source:path dest:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -0,0 +1,288 @@
---
date: 2018-09-01T12:54:54+01:00
title: "rclone copyurl"
slug: rclone_copyurl
url: /commands/rclone_copyurl/
---
## rclone copyurl
Copy URL content to dest.
### Synopsis
Download a URL's content and copy it to the destination
without saving it in temporary storage.
```
rclone copyurl https://example.com dest:path [flags]
```
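For example, a minimal illustration (the URL and remote path below are placeholders, not values taken from this release):
```
rclone copyurl https://example.com/files/archive.zip remote:backup/archive.zip
```
The content is streamed straight to the destination as it downloads, so no temporary copy is written locally.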
### Options
```
-h, --help help for copyurl
```
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IP address:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/
@ -53,36 +53,56 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -91,26 +111,39 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -122,8 +155,21 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -132,17 +178,26 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -151,8 +206,21 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IP address:Port or :Port to bind server to. (default "localhost:5572")
@ -168,22 +236,56 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -193,12 +295,19 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone cryptdecode"
slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/
@ -37,36 +37,56 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -75,26 +95,39 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -106,8 +139,21 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -116,17 +162,26 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -135,8 +190,21 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -152,22 +220,56 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -177,12 +279,19 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
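For illustration only, the inherited options above can be supplied directly on the command line alongside the command's own arguments. The remote name `secret:` and the encrypted filename below are hypothetical placeholders, not output from a real remote:

```
# "secret:" and the filename are hypothetical placeholders.
rclone cryptdecode secret: p7kmnbv0lk7o3tunsv0o3tnmlq

# Raise the log level using the inherited --log-level flag shown above.
rclone --log-level DEBUG cryptdecode secret: p7kmnbv0lk7o3tunsv0o3tnmlq
```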
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone dbhashsum"
slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/
@ -30,36 +30,56 @@ rclone dbhashsum remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -68,26 +88,39 @@ rclone dbhashsum remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -99,8 +132,21 @@ rclone dbhashsum remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -109,17 +155,26 @@ rclone dbhashsum remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -128,8 +183,21 @@ rclone dbhashsum remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -145,22 +213,56 @@ rclone dbhashsum remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -170,12 +272,19 @@ rclone dbhashsum remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
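As a sketch only, the command takes a remote path and accepts any of the inherited flags listed above; the remote path and filter pattern here are hypothetical:

```
# "dropbox:documents" is a hypothetical remote path.
rclone dbhashsum dropbox:documents

# Combine with the inherited --include flag shown above to limit the listing.
rclone --include "*.pdf" dbhashsum dropbox:documents
```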
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
@ -106,36 +106,56 @@ rclone dedupe [mode] remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -144,26 +164,39 @@ rclone dedupe [mode] remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -175,8 +208,21 @@ rclone dedupe [mode] remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -185,17 +231,26 @@ rclone dedupe [mode] remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -204,8 +259,21 @@ rclone dedupe [mode] remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -221,22 +289,56 @@ rclone dedupe [mode] remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -246,12 +348,19 @@ rclone dedupe [mode] remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
@ -42,36 +42,56 @@ rclone delete remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -80,26 +100,39 @@ rclone delete remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -111,8 +144,21 @@ rclone delete remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -121,17 +167,26 @@ rclone delete remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -140,8 +195,21 @@ rclone delete remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -157,22 +225,56 @@ rclone delete remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -182,12 +284,19 @@ rclone delete remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
@ -1,17 +1,17 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone deletefile"
slug: rclone_deletefile
url: /commands/rclone_deletefile/
---
## rclone deletefile
Remove a single file path from remote.
Remove a single file from remote.
### Synopsis
Remove a single file path from remote. Unlike `delete` it cannot be used to
Remove a single file from remote. Unlike `delete` it cannot be used to
remove a directory and it doesn't obey include/exclude filters - if the specified file exists,
it will always be removed.
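A minimal usage sketch (the remote name and file path are placeholders for illustration, not taken from the source):
```
rclone deletefile remote:path/to/file.txt
```
Only that single file is removed; directories and include/exclude filters are not considered.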
@ -29,36 +29,56 @@ rclone deletefile remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -67,26 +87,39 @@ rclone deletefile remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -98,8 +131,21 @@ rclone deletefile remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -108,17 +154,26 @@ rclone deletefile remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -127,8 +182,21 @@ rclone deletefile remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -144,22 +212,56 @@ rclone deletefile remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -169,12 +271,19 @@ rclone deletefile remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@ -24,36 +24,56 @@ Run with --help to list the supported shells.
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -62,26 +82,39 @@ Run with --help to list the supported shells.
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -93,8 +126,21 @@ Run with --help to list the supported shells.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -103,17 +149,26 @@ Run with --help to list the supported shells.
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -122,8 +177,21 @@ Run with --help to list the supported shells.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -139,22 +207,56 @@ Run with --help to list the supported shells.
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -164,14 +266,21 @@ Run with --help to list the supported shells.
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
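For orientation, a minimal usage sketch (not part of the generated reference): the supported shells are the subcommands listed under SEE ALSO, and running with `--help` lists them.

```
# List the supported shells for completion script generation
rclone genautocomplete --help
```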
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
* [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone.
* [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone genautocomplete bash"
slug: rclone_genautocomplete_bash
url: /commands/rclone_genautocomplete_bash/
@ -40,36 +40,56 @@ rclone genautocomplete bash [output_file] [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -78,26 +98,39 @@ rclone genautocomplete bash [output_file] [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -109,8 +142,21 @@ rclone genautocomplete bash [output_file] [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -119,17 +165,26 @@ rclone genautocomplete bash [output_file] [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -138,8 +193,21 @@ rclone genautocomplete bash [output_file] [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -155,22 +223,56 @@ rclone genautocomplete bash [output_file] [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -180,12 +282,19 @@ rclone genautocomplete bash [output_file] [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
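A brief usage sketch, assuming a user-writable output path (the file name below is only an example; the default output location is system-wide and will usually need root):

```
# Write the bash completion script to a file of your choice and source
# it in the current shell (add the source line to ~/.bashrc to make it
# permanent):
rclone genautocomplete bash ~/.rclone-completion.bash
. ~/.rclone-completion.bash
```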
### SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone genautocomplete zsh"
slug: rclone_genautocomplete_zsh
url: /commands/rclone_genautocomplete_zsh/
@ -40,36 +40,56 @@ rclone genautocomplete zsh [output_file] [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -78,26 +98,39 @@ rclone genautocomplete zsh [output_file] [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -109,8 +142,21 @@ rclone genautocomplete zsh [output_file] [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -119,17 +165,26 @@ rclone genautocomplete zsh [output_file] [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -138,8 +193,21 @@ rclone genautocomplete zsh [output_file] [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -155,22 +223,56 @@ rclone genautocomplete zsh [output_file] [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -180,12 +282,19 @@ rclone genautocomplete zsh [output_file] [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
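A similar sketch for zsh, assuming the target directory is on your $fpath (both the directory and the _rclone file name below are illustrative):

```
# Write the zsh completion function somewhere on $fpath, then rebuild
# the completion cache so zsh picks it up:
rclone genautocomplete zsh ~/.zsh/completions/_rclone
autoload -U compinit && compinit
```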
### SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
@ -28,36 +28,56 @@ rclone gendocs output_directory [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -66,26 +86,39 @@ rclone gendocs output_directory [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -97,8 +130,21 @@ rclone gendocs output_directory [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -107,17 +153,26 @@ rclone gendocs output_directory [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -126,8 +181,21 @@ rclone gendocs output_directory [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -143,22 +211,56 @@ rclone gendocs output_directory [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -168,12 +270,19 @@ rclone gendocs output_directory [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone hashsum"
slug: rclone_hashsum
url: /commands/rclone_hashsum/
@ -42,36 +42,56 @@ rclone hashsum <hash> remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -80,26 +100,39 @@ rclone hashsum <hash> remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -111,8 +144,21 @@ rclone hashsum <hash> remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -121,17 +167,26 @@ rclone hashsum <hash> remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -140,8 +195,21 @@ rclone hashsum <hash> remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -157,22 +225,56 @@ rclone hashsum <hash> remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -182,12 +284,19 @@ rclone hashsum <hash> remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone link"
slug: rclone_link
url: /commands/rclone_link/
@ -35,36 +35,56 @@ rclone link remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -73,26 +93,39 @@ rclone link remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -104,8 +137,21 @@ rclone link remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -114,17 +160,26 @@ rclone link remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -133,8 +188,21 @@ rclone link remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -150,22 +218,56 @@ rclone link remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -175,12 +277,19 @@ rclone link remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone listremotes"
slug: rclone_listremotes
url: /commands/rclone_listremotes/
@ -30,36 +30,56 @@ rclone listremotes [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -68,26 +88,39 @@ rclone listremotes [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -99,8 +132,21 @@ rclone listremotes [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -109,17 +155,26 @@ rclone listremotes [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -128,8 +183,21 @@ rclone listremotes [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -145,22 +213,56 @@ rclone listremotes [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -170,12 +272,19 @@ rclone listremotes [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
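
Many of the flags above now take `SizeSuffix` or `Duration` values rather than plain integers. As a minimal illustrative sketch of how such values are written (the remote names `gdrive:` and `cachedremote:` are hypothetical, and the values are examples only):

```
# SizeSuffix values are a number with an optional b|k|M|G suffix, as with --bwlimit above
rclone sync /local/photos gdrive:photos --drive-chunk-size 64M

# Duration values use Go-style notation such as 30s, 5m or 1h30m
rclone ls cachedremote: --cache-info-age 12h --cache-db-wait-time 5s
```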
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
@ -59,36 +59,56 @@ rclone ls remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before starting.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -97,26 +117,39 @@ rclone ls remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -128,8 +161,21 @@ rclone ls remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -138,17 +184,26 @@ rclone ls remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -157,8 +212,21 @@ rclone ls remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -174,22 +242,56 @@ rclone ls remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -199,12 +301,19 @@ rclone ls remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
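
Several of the inherited backend flags above change what `rclone ls` actually lists. A brief sketch (the remote names `b2remote:` and `gdrive:` are hypothetical):

```
# Include old file versions when listing a B2 bucket
rclone ls b2remote:bucket --b2-versions

# Only list Drive files shared with the authenticated user, two levels deep
rclone ls gdrive: --drive-shared-with-me --max-depth 2
```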
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@ -70,36 +70,56 @@ rclone lsd remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before starting.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -108,26 +128,39 @@ rclone lsd remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -139,8 +172,21 @@ rclone lsd remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -149,17 +195,26 @@ rclone lsd remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -168,8 +223,21 @@ rclone lsd remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -185,22 +253,56 @@ rclone lsd remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -210,12 +312,19 @@ rclone lsd remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
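
The general networking and logging flags above apply to `rclone lsd` as well. A minimal sketch for listing over a slow or flaky connection (the remote name `myremote:` is hypothetical):

```
# More verbose output, extra low-level retries and longer timeouts
rclone lsd myremote: -vv --low-level-retries 20 --contimeout 2m --timeout 10m
```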
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone lsf"
slug: rclone_lsf
url: /commands/rclone_lsf/
@ -148,36 +148,56 @@ rclone lsf remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower values are good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -186,26 +206,39 @@ rclone lsf remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -217,8 +250,21 @@ rclone lsf remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -227,17 +273,26 @@ rclone lsf remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -246,8 +301,21 @@ rclone lsf remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -263,22 +331,56 @@ rclone lsf remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID, you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -288,12 +390,19 @@ rclone lsf remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
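The inherited filter flags above combine with `lsf` in the usual way; an illustrative sketch (`remote:path` is a placeholder):

```
# List only the entries matching the include pattern, one path per line.
rclone lsf --include "*.jpg" remote:path
```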
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone lsjson"
slug: rclone_lsjson
url: /commands/rclone_lsjson/
@ -21,6 +21,7 @@ The output is an array of Items, where each Item looks like this
"DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
},
"ID": "y2djkhiujf83u33",
"OrigID": "UYOJVTUW00Q1RzTDA",
"IsDir" : false,
"MimeType" : "application/octet-stream",
"ModTime" : "2017-05-31T16:15:57.034468261+01:00",
@ -80,42 +81,63 @@ rclone lsjson remote:path [flags]
--hash Include hashes in the output (may take longer).
-h, --help help for lsjson
--no-modtime Don't read the modification time (can speed things up).
--original Show the ID of the underlying Object.
-R, --recursive Recurse into the listing.
```
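A brief usage sketch of the new `--original` flag (`remote:path` is a placeholder); it adds the backend's native identifier as the `OrigID` field shown in the Item example above:

```
# Include the underlying object IDs ("OrigID") in the JSON listing.
rclone lsjson --original remote:path
```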
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower values are good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -124,26 +146,39 @@ rclone lsjson remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -155,8 +190,21 @@ rclone lsjson remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -165,17 +213,26 @@ rclone lsjson remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -184,8 +241,21 @@ rclone lsjson remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -201,22 +271,56 @@ rclone lsjson remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID, you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -226,12 +330,19 @@ rclone lsjson remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@ -59,36 +59,56 @@ rclone lsl remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower values are good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -97,26 +117,39 @@ rclone lsl remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -128,8 +161,21 @@ rclone lsl remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -138,17 +184,26 @@ rclone lsl remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -157,8 +212,21 @@ rclone lsl remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -174,22 +242,56 @@ rclone lsl remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -199,12 +301,19 @@ rclone lsl remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
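As a brief illustration (the remote name and the filter values below are placeholders, not part of the generated reference), the inherited filtering flags can be combined with `rclone lsl`, for instance to list only recent, reasonably large files:

```
rclone lsl remote:backup --max-age 7d --min-size 10M
```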
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@ -28,36 +28,56 @@ rclone md5sum remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before starting.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -66,26 +86,39 @@ rclone md5sum remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -97,8 +130,21 @@ rclone md5sum remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -107,17 +153,26 @@ rclone md5sum remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -126,8 +181,21 @@ rclone md5sum remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -143,22 +211,56 @@ rclone md5sum remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -168,12 +270,19 @@ rclone md5sum remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
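For illustration only (the remote path and include pattern are placeholders), the inherited `--include` flag can restrict which files are checksummed, and the output can be redirected to a local file:

```
rclone md5sum remote:backup --include "*.iso" > backup-md5sums.txt
```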
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@ -25,36 +25,56 @@ rclone mkdir remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before starting.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -63,26 +83,39 @@ rclone mkdir remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -94,8 +127,21 @@ rclone mkdir remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -104,17 +150,26 @@ rclone mkdir remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -123,8 +178,21 @@ rclone mkdir remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be a multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -140,22 +208,56 @@ rclone mkdir remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access; if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -165,12 +267,19 @@ rclone mkdir remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
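As a minimal sketch (the remote path here is a placeholder), a directory can be created with verbose output using the inherited `-v` flag:

```
rclone mkdir -v remote:backup/2018-09-01
```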
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
@ -182,6 +182,23 @@ Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
### File Buffering
The `--buffer-size` flag determines the amount of memory that will be used to buffer data in advance.
Each open file descriptor will try to keep the specified amount of
data in memory at all times. The buffered data is bound to one file
descriptor and won't be shared between multiple open file descriptors
of the same file.
This flag is an upper limit for the memory used per file descriptor.
The buffer will only use memory for data that is downloaded but
not yet read. If the buffer is empty, only a small amount of memory
will be used.
The maximum memory used by rclone for buffering can be up to
`--buffer-size * open files`.
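As a rough, illustrative calculation (the flag value and the number of open files below are arbitrary assumptions), the worst case can be estimated by multiplying the buffer size by the expected number of simultaneously open files:

```
# Hypothetical example: with --buffer-size 32M and up to 10 files
# open at once, read buffering may use up to 32M * 10 = 320M.
rclone mount remote:path /path/to/mountpoint --buffer-size 32M
```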
### File Caching
**NB** File caching is **EXPERIMENTAL** - use with care!
@ -284,6 +301,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--allow-root Allow access to root user.
--attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
--daemon Run mount as a daemon (background mode).
--daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes).
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
@ -302,8 +320,8 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size int Read the source objects in chunks.
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. -1 is unlimited.
--vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--volname string Set the volume name (not supported by all OSes).
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
```
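To illustrate how the `--vfs-read-chunk-size` flags listed above interact (the values chosen here are arbitrary examples, not recommendations): with a 64M chunk size and a 512M limit, each file is read in ranges that double in size until the limit is reached:

```
# Hypothetical example: chunk sizes grow 64M, 128M, 256M, 512M and
# then stay at 512M for the rest of the file.
rclone mount remote:path /path/to/mountpoint \
    --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 512M
```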
@ -311,36 +329,56 @@ rclone mount remote:path /path/to/mountpoint [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before starting.
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -349,26 +387,39 @@ rclone mount remote:path /path/to/mountpoint [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -380,8 +431,21 @@ rclone mount remote:path /path/to/mountpoint [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -390,17 +454,26 @@ rclone mount remote:path /path/to/mountpoint [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -409,8 +482,21 @@ rclone mount remote:path /path/to/mountpoint [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -426,22 +512,56 @@ rclone mount remote:path /path/to/mountpoint [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -451,12 +571,19 @@ rclone mount remote:path /path/to/mountpoint [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
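
As a minimal usage sketch (the remote name, mount point and log file path below are placeholders, and the flags are just an illustrative selection from the list above):

```
# Mount a remote and log at INFO level to a file while the mount is running.
rclone mount remote:path /path/to/mountpoint --log-level INFO --log-file /tmp/rclone-mount.log
```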
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@ -45,36 +45,56 @@ rclone move source:path dest:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. A lower value is good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -83,26 +103,39 @@ rclone move source:path dest:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -114,8 +147,21 @@ rclone move source:path dest:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -124,17 +170,26 @@ rclone move source:path dest:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -143,8 +198,21 @@ rclone move source:path dest:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -160,22 +228,56 @@ rclone move source:path dest:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -185,12 +287,19 @@ rclone move source:path dest:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
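
As a minimal usage sketch (source:path and dest:path are placeholders; the flags are an illustrative selection from the list above):

```
# Preview which files older than 30 days would be moved, then run the move with interactive progress.
rclone move --dry-run --min-age 30d source:path dest:path
rclone move -P --min-age 30d source:path dest:path
```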
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone moveto"
slug: rclone_moveto
url: /commands/rclone_moveto/
@ -54,36 +54,56 @@ rclone moveto source:path dest:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. A lower value is good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -92,26 +112,39 @@ rclone moveto source:path dest:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -123,8 +156,21 @@ rclone moveto source:path dest:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -133,17 +179,26 @@ rclone moveto source:path dest:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -152,8 +207,21 @@ rclone moveto source:path dest:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -169,22 +237,56 @@ rclone moveto source:path dest:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -194,12 +296,19 @@ rclone moveto source:path dest:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
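
As a minimal usage sketch (paths are placeholders): moveto moves a single named file or directory to the exact destination path given, so it can rename in the process, and the global flags above apply here as with the other commands:

```
# Check the rename first with --dry-run, then perform it.
rclone moveto --dry-run source:path/file.txt dest:path/renamed.txt
rclone moveto source:path/file.txt dest:path/renamed.txt
```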
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone ncdu"
slug: rclone_ncdu
url: /commands/rclone_ncdu/
@ -52,36 +52,56 @@ rclone ncdu remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -90,26 +110,39 @@ rclone ncdu remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -121,8 +154,21 @@ rclone ncdu remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -131,17 +177,26 @@ rclone ncdu remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -150,8 +205,21 @@ rclone ncdu remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -167,22 +235,56 @@ rclone ncdu remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -192,12 +294,19 @@ rclone ncdu remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
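The global filter flags above apply to `rclone ncdu` as well, so the interactive usage browser can be restricted before it scans a large remote. A small sketch with placeholder values:

```
# Browse disk usage of remote:backups interactively, limiting the
# listing depth and filtering out temporary files (remote name,
# depth and pattern are illustrative only).
rclone ncdu remote:backups --max-depth 3 --filter '- *.tmp'
```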


@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone obscure"
slug: rclone_obscure
url: /commands/rclone_obscure/
@ -25,36 +25,56 @@ rclone obscure password [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -63,26 +83,39 @@ rclone obscure password [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -94,8 +127,21 @@ rclone obscure password [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -104,17 +150,26 @@ rclone obscure password [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -123,8 +178,21 @@ rclone obscure password [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -140,22 +208,56 @@ rclone obscure password [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -165,12 +267,19 @@ rclone obscure password [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
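`rclone obscure` turns a plain-text password into the obscured form expected by the backend password options listed above (for example `--webdav-pass` or `--mega-pass`). Obscuring is reversible encoding rather than strong encryption, so it only keeps casual eyes off the configuration. A short sketch with placeholder values:

```
# Generate an obscured password and use it with a backend password
# flag (the password and the remote name "mydav" are placeholders,
# assuming such a remote has been configured).
OBSCURED="$(rclone obscure 'example-password')"
rclone lsd mydav: --webdav-pass "$OBSCURED"
```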


@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
@ -29,36 +29,56 @@ rclone purge remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -67,26 +87,39 @@ rclone purge remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -98,8 +131,21 @@ rclone purge remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -108,17 +154,26 @@ rclone purge remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -127,8 +182,21 @@ rclone purge remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -144,22 +212,56 @@ rclone purge remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -169,12 +271,19 @@ rclone purge remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
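For example, a cautious run can preview what would be removed before committing to the deletion (the remote name and path below are placeholders):

```
# Show what purge would delete, without changing anything
rclone purge --dry-run -v remote:path/to/dir

# Remove the directory and all of its contents
rclone purge remote:path/to/dir
```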
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone rc"
slug: rclone_rc
url: /commands/rclone_rc/
@ -35,36 +35,56 @@ rclone rc commands parameter [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server URL.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -73,26 +93,39 @@ rclone rc commands parameter [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -104,8 +137,21 @@ rclone rc commands parameter [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -114,17 +160,26 @@ rclone rc commands parameter [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -133,8 +188,21 @@ rclone rc commands parameter [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -150,22 +218,56 @@ rclone rc commands parameter [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -175,12 +277,19 @@ rclone rc commands parameter [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
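For example, once a transfer has been started with the remote control server enabled, it can be queried from another shell (remote names are placeholders; localhost:5572 is the default address, and core/stats is used here assuming that method is available in the running version):

```
# Start a copy with the remote control server enabled
rclone copy source:path dest:path --rc

# From a second terminal, ask the running rclone for its current stats
rclone rc core/stats
```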
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone rcat"
slug: rclone_rcat
url: /commands/rclone_rcat/
@ -47,36 +47,56 @@ rclone rcat remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server URL.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -85,26 +105,39 @@ rclone rcat remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -116,8 +149,21 @@ rclone rcat remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -126,17 +172,26 @@ rclone rcat remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -145,8 +200,21 @@ rclone rcat remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -162,22 +230,56 @@ rclone rcat remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -187,12 +289,19 @@ rclone rcat remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
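For example, rcat can stream data from standard input straight into a remote object (the remote and paths below are placeholders):

```
# Upload a short message from stdin
echo "hello world" | rclone rcat remote:path/to/file.txt

# Stream a tar archive to the remote as it is created
tar czf - /home/user/dir | rclone rcat remote:backup/dir.tgz
```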
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
@ -27,36 +27,56 @@ rclone rmdir remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server URL.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -65,26 +85,39 @@ rclone rmdir remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -96,8 +129,21 @@ rclone rmdir remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -106,17 +152,26 @@ rclone rmdir remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -125,8 +180,21 @@ rclone rmdir remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -142,22 +210,56 @@ rclone rmdir remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -167,12 +269,19 @@ rclone rmdir remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone rmdirs"
slug: rclone_rmdirs
url: /commands/rclone_rmdirs/
@ -35,36 +35,56 @@ rclone rmdirs remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -73,26 +93,39 @@ rclone rmdirs remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -104,8 +137,21 @@ rclone rmdirs remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -114,17 +160,26 @@ rclone rmdirs remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -133,8 +188,21 @@ rclone rmdirs remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -150,22 +218,56 @@ rclone rmdirs remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -175,12 +277,19 @@ rclone rmdirs remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone serve"
slug: rclone_serve
url: /commands/rclone_serve/
@ -31,36 +31,56 @@ rclone serve <protocol> [opts] <remote> [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -69,26 +89,39 @@ rclone serve <protocol> [opts] <remote> [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -100,8 +133,21 @@ rclone serve <protocol> [opts] <remote> [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -110,17 +156,26 @@ rclone serve <protocol> [opts] <remote> [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -129,8 +184,21 @@ rclone serve <protocol> [opts] <remote> [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -146,22 +214,56 @@ rclone serve <protocol> [opts] <remote> [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -171,15 +273,22 @@ rclone serve <protocol> [opts] <remote> [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone serve http"
slug: rclone_serve_http
url: /commands/rclone_serve_http/
@ -96,6 +96,23 @@ Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
### File Buffering
The `--buffer-size` flag determines the amount of memory
that will be used to buffer data in advance.
Each open file descriptor will try to keep the specified amount of
data in memory at all times. The buffered data is bound to one file
descriptor and won't be shared between multiple open file descriptors
of the same file.
This flag is an upper limit on the memory used per file descriptor.
The buffer will only use memory for data that is downloaded but not
yet read. If the buffer is empty, only a small amount of memory
will be used.
The maximum memory used by rclone for buffering can be up to
`--buffer-size * open files`.
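As a rough illustration of that arithmetic (the flag value below is only an example, not a recommended setting), each open file can buffer up to `--buffer-size` of data, so ten files open at once against this server could hold up to roughly ten times that amount in memory:

```
# Illustrative only: with a 32M buffer, 10 concurrently open files
# may keep up to about 320M of read-ahead data in memory.
rclone serve http remote:path --buffer-size 32M
```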
### File Caching
**NB** File caching is **EXPERIMENTAL** - use with care!
@ -217,43 +234,63 @@ rclone serve http remote:path [flags]
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size int Read the source objects in chunks.
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. -1 is unlimited.
--vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
```
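The chunked reading described by the two `--vfs-read-chunk-size*` flags above fetches one chunk per request and, when a limit larger than the chunk size is set, doubles the chunk size after each request until the limit is reached. As an illustrative invocation (not part of the generated help), the settings below would request 64M, then 128M, 256M, 512M, 1G and finally 2G per read:

```
# Illustrative only: start with 64M chunks and double up to a 2G ceiling.
rclone serve http remote:path --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G
```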
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -262,26 +299,39 @@ rclone serve http remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -293,8 +343,21 @@ rclone serve http remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -303,17 +366,26 @@ rclone serve http remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -322,8 +394,21 @@ rclone serve http remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -339,22 +424,56 @@ rclone serve http remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -364,12 +483,19 @@ rclone serve http remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone serve restic"
slug: rclone_serve_restic
url: /commands/rclone_serve_restic/
@ -161,36 +161,56 @@ rclone serve restic remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -199,26 +219,39 @@ rclone serve restic remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -230,8 +263,21 @@ rclone serve restic remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -240,17 +286,26 @@ rclone serve restic remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -259,8 +314,21 @@ rclone serve restic remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -276,22 +344,56 @@ rclone serve restic remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -301,12 +403,19 @@ rclone serve restic remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone serve webdav"
slug: rclone_serve_webdav
url: /commands/rclone_serve_webdav/
@ -16,8 +16,19 @@ remote over HTTP via the webdav protocol. This can be viewed with a
webdav client or you can make a remote of type webdav to read and
write it.
NB at the moment each directory listing reads the start of each file,
which is undesirable: see https://github.com/golang/go/issues/22577
### Webdav options
#### --etag-hash
This controls the ETag header. Without this flag the ETag will be
based on the ModTime and Size of the object.
If this flag is set to "auto" then rclone will choose the first
supported hash on the backend or you can use a named hash such as
"MD5" or "SHA-1".
Use "rclone hashsum" to see the full list.
### Server options
@ -93,6 +104,23 @@ Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
### File Buffering
The `--buffer-size` flag determines the amount of memory that
will be used to buffer data in advance.
Each open file descriptor will try to keep the specified amount of
data in memory at all times. The buffered data is bound to one file
descriptor and won't be shared between multiple open file descriptors
of the same file.
This flag is an upper limit for the memory used per file descriptor.
The buffer will only use memory for data that is downloaded but
not yet read. If the buffer is empty, only a small amount of memory
will be used.
The maximum memory used by rclone for buffering can be up to
`--buffer-size * open files`.
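As an illustrative sketch (the remote name and the 32M figure are placeholders, not recommendations), the read-ahead can be capped per file descriptor like this:
```
# Keep at most 32M of read-ahead buffered per open file descriptor
rclone serve webdav remote:path --buffer-size 32M
```
With N files open this bounds the buffering memory at roughly N * 32M.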
### File Caching
**NB** File caching is **EXPERIMENTAL** - use with care!
@ -194,6 +222,7 @@ rclone serve webdav remote:path [flags]
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--etag-hash string Which hash to use for the ETag, or auto or blank for off
--gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for webdav
--htpasswd string htpasswd file - if not provided no authentication is done
@ -214,43 +243,63 @@ rclone serve webdav remote:path [flags]
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size int Read the source objects in chunks.
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. -1 is unlimited.
--vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -259,26 +308,39 @@ rclone serve webdav remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -290,8 +352,21 @@ rclone serve webdav remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -300,17 +375,26 @@ rclone serve webdav remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -319,8 +403,21 @@ rclone serve webdav remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -336,22 +433,56 @@ rclone serve webdav remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -361,12 +492,19 @@ rclone serve webdav remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
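As an illustrative sketch only (the remote name and flag values are placeholders, not part of the generated reference), these inherited flags combine with the command like so:

```
# Serve a remote over WebDAV, capping bandwidth at 10 MBytes/s and logging at INFO level
rclone serve webdav remote:path --bwlimit 10M --log-level INFO
```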
### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone sha1sum"
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/
@ -28,36 +28,56 @@ rclone sha1sum remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server URL.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -66,26 +86,39 @@ rclone sha1sum remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -97,8 +130,21 @@ rclone sha1sum remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -107,17 +153,26 @@ rclone sha1sum remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -126,8 +181,21 @@ rclone sha1sum remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -143,22 +211,56 @@ rclone sha1sum remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -168,12 +270,19 @@ rclone sha1sum remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
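As an illustrative sketch only (remote name and filter pattern are placeholders), the command is typically combined with the filter flags listed above:

```
# Print SHA-1 hashes for every object under the path
rclone sha1sum remote:path

# Limit the listing to matching files
rclone sha1sum remote:path --include "*.iso"
```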
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
@ -26,36 +26,56 @@ rclone size remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server URL.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -64,26 +84,39 @@ rclone size remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -95,8 +128,21 @@ rclone size remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -105,17 +151,26 @@ rclone size remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -124,8 +179,21 @@ rclone size remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -141,22 +209,56 @@ rclone size remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -166,12 +268,19 @@ rclone size remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
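As an illustrative sketch only (remote name and age value are placeholders), the size report can be narrowed with the filter flags listed above:

```
# Report the total count and size of objects under the path
rclone size remote:path

# Only count files modified within the last 30 days
rclone size remote:path --max-age 30d
```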
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone sync"
slug: rclone_sync
url: /commands/rclone_sync/
@ -44,36 +44,56 @@ rclone sync source:path dest:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server URL.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -82,26 +102,39 @@ rclone sync source:path dest:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -113,8 +146,21 @@ rclone sync source:path dest:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -123,17 +169,26 @@ rclone sync source:path dest:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -142,8 +197,21 @@ rclone sync source:path dest:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -159,22 +227,56 @@ rclone sync source:path dest:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -184,12 +286,19 @@ rclone sync source:path dest:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
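
As an illustration only (the source path and remote name below are placeholders), a few of the options listed above can be combined in a single sync:

```
rclone sync /home/user/photos remote:photos -P --max-transfer 10G --stats-one-line
```

This shows interactive progress with `-P`, stops starting new transfers once roughly 10G of data has been moved, and keeps the statistics output on a single line.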
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone touch"
slug: rclone_touch
url: /commands/rclone_touch/
@ -27,36 +27,56 @@ rclone touch remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -65,26 +85,39 @@ rclone touch remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -96,8 +129,21 @@ rclone touch remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -106,17 +152,26 @@ rclone touch remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -125,8 +180,21 @@ rclone touch remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -142,22 +210,56 @@ rclone touch remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -167,12 +269,19 @@ rclone touch remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
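
As a minimal example (the remote name and path are placeholders):

```
rclone touch remote:path/to/file.txt
```

This creates an empty file.txt on the remote if it does not already exist, or updates its modification time if it does.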
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone tree"
slug: rclone_tree
url: /commands/rclone_tree/
@ -68,36 +68,56 @@ rclone tree remote:path [flags]
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -106,26 +126,39 @@ rclone tree remote:path [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -137,8 +170,21 @@ rclone tree remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -147,17 +193,26 @@ rclone tree remote:path [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -166,8 +221,21 @@ rclone tree remote:path [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -183,22 +251,56 @@ rclone tree remote:path [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -208,12 +310,19 @@ rclone tree remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
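
For illustration (the remote name is a placeholder), tree can be combined with the inherited `--max-depth` option listed above to keep the listing shallow:

```
rclone tree remote:path --max-depth 2
```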
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-06-16T18:20:28+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone version"
slug: rclone_version
url: /commands/rclone_version/
@ -10,7 +10,34 @@ Show the version number.
### Synopsis
Show the version number.
Show the version number, the go version and the architecture.
Eg
$ rclone version
rclone v1.41
- os/arch: linux/amd64
- go version: go1.10
If you supply the --check flag, then it will do an online check to
compare your version with the latest release and the latest beta.
$ rclone version --check
yours: 1.42.0.6
latest: 1.42 (released 2018-06-16)
beta: 1.42.0.5 (released 2018-06-17)
Or
$ rclone version --check
yours: 1.41
latest: 1.42 (released 2018-06-16)
upgrade: https://downloads.rclone.org/v1.42
beta: 1.42.0.5 (released 2018-06-17)
upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
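A small sketch for scripted use: since the upgrade: lines above only appear when a newer version exists, their presence can be tested for, e.g.
$ rclone version --check | grep -q upgrade && echo "new rclone release available"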
```
rclone version [flags]
@ -19,42 +46,63 @@ rclone version [flags]
### Options
```
--check Check for new version.
-h, --help help for version
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
--azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
--buffer-size int Buffer size when copying files. (default 16M)
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size string The size of a chunk (default "5M")
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
--cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
--cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
@ -63,26 +111,39 @@ rclone version [flags]
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
@ -94,8 +155,21 @@ rclone version [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
--gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
@ -104,17 +178,26 @@ rclone version [flags]
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug If set then output more debug from mega.
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
@ -123,8 +206,21 @@ rclone version [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
@ -140,22 +236,56 @@ rclone version [flags]
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access, if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of the key.
--s3-storage-class string The storage class to use when storing objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--ssh-path-override string Override path used by SSH connection.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
@ -165,12 +295,19 @@ rclone version [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.42")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
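
As a quick orientation for the expanded flag set above, here is a minimal, hypothetical invocation; the local path and the `remote:` name are placeholders rather than anything from this commit, and the flags are taken from the listing:

```
rclone sync /home/user/docs remote:docs -P --stats-one-line \
    --transfers 8 --drive-chunk-size 64M --dry-run
```

`-P` is the short form of `--progress` shown above, `--drive-chunk-size` is given a power-of-two value as its description requires, and `--dry-run` keeps the example free of permanent changes.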
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.42
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 16-Jun-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

View File

@ -1 +1 @@
v1.42
v1.43

View File

@ -1,4 +1,4 @@
package fs
// Version of rclone
var Version = "v1.42-DEV"
var Version = "v1.43"

2016
rclone.1

File diff suppressed because it is too large Load Diff