diff --git a/.github/ISSUE_TEMPLATE/Bug.md b/.github/ISSUE_TEMPLATE/Bug.md index 67241ef9e..5aa049c3d 100644 --- a/.github/ISSUE_TEMPLATE/Bug.md +++ b/.github/ISSUE_TEMPLATE/Bug.md @@ -9,7 +9,7 @@ We understand you are having a problem with rclone; we want to help you with tha **STOP and READ** **YOUR POST WILL BE REMOVED IF IT IS LOW QUALITY**: -Please show the effort you've put in to solving the problem and please be specific. +Please show the effort you've put into solving the problem and please be specific. People are volunteering their time to help! Low effort posts are not likely to get good answers! If you think you might have found a bug, try to replicate it with the latest beta (or stable). diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 6181094fc..e58e2d79a 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -223,7 +223,7 @@ find the results at https://pub.rclone.org/integration-tests/ Rclone code is organised into a small number of top level directories with modules beneath. - * backend - the rclone backends for interfacing to cloud providers - + * backend - the rclone backends for interfacing to cloud providers - * all - import this to load all the cloud providers * ...providers * bin - scripts for use while building or maintaining rclone @@ -233,7 +233,7 @@ with modules beneath. * cmdtest - end-to-end tests of commands, flags, environment variables,... * docs - the documentation and website * content - adjust these docs only - everything else is autogenerated - * command - these are auto generated - edit the corresponding .go file + * command - these are auto-generated - edit the corresponding .go file * fs - main rclone definitions - minimal amount of code * accounting - bandwidth limiting and statistics * asyncreader - an io.Reader which reads ahead @@ -299,7 +299,7 @@ the source file in the `Help:` field. countries, it looks better without an ending period/full stop character. 
The only documentation you need to edit are the `docs/content/*.md` -files. The `MANUAL.*`, `rclone.1`, web site, etc. are all auto generated +files. The `MANUAL.*`, `rclone.1`, website, etc. are all auto-generated from those during the release process. See the `make doc` and `make website` targets in the Makefile if you are interested in how. You don't need to run these when adding a feature. @@ -350,7 +350,7 @@ And here is an example of a longer one: ``` mount: fix hang on errored upload -In certain circumstances if an upload failed then the mount could hang +In certain circumstances, if an upload failed then the mount could hang indefinitely. This was fixed by closing the read pipe after the Put completed. This will cause the write side to return a pipe closed error fixing the hang. @@ -425,8 +425,8 @@ Research Getting going * Create `backend/remote/remote.go` (copy this from a similar remote) - * box is a good one to start from if you have a directory based remote - * b2 is a good one to start from if you have a bucket based remote + * box is a good one to start from if you have a directory-based remote + * b2 is a good one to start from if you have a bucket-based remote * Add your remote to the imports in `backend/all/all.go` * HTTP based remotes are easiest to maintain if they use rclone's rest module, but if there is a really good go SDK then use that instead. * Try to implement as many optional methods as possible as it makes the remote more usable. diff --git a/MAINTAINERS.md b/MAINTAINERS.md index 4665b3697..9eaa6da26 100644 --- a/MAINTAINERS.md +++ b/MAINTAINERS.md @@ -19,7 +19,7 @@ Current active maintainers of rclone are: **This is a work in progress Draft** -This is a guide for how to be an rclone maintainer. This is mostly a writeup of what I (@ncw) attempt to do. +This is a guide for how to be an rclone maintainer. This is mostly a write-up of what I (@ncw) attempt to do. 
## Triaging Tickets ##

@@ -27,15 +27,15 @@ When a ticket comes in it should be triaged. This means it should be classified

Rclone uses the labels like this:

-* `bug` - a definite verified bug
+* `bug` - a definite, verified bug
* `can't reproduce` - a problem which we can't reproduce
* `doc fix` - a bug in the documentation - if users need help understanding the docs add this label
* `duplicate` - normally close these and ask the user to subscribe to the original
* `enhancement: new remote` - a new rclone backend
* `enhancement` - a new feature
* `FUSE` - to do with `rclone mount` command
-* `good first issue` - mark these if you find a small self contained issue - these get shown to new visitors to the project
-* `help` wanted - mark these if you find a self contained issue - these get shown to new visitors to the project
+* `good first issue` - mark these if you find a small self-contained issue - these get shown to new visitors to the project
+* `help wanted` - mark these if you find a self-contained issue - these get shown to new visitors to the project
* `IMPORTANT` - note to maintainers not to forget to fix this for the release
* `maintenance` - internal enhancement, code re-organisation, etc.
* `Needs Go 1.XX` - waiting for that version of Go to be released

@@ -51,7 +51,7 @@ The milestones have these meanings:

* v1.XX - stuff we would like to fit into this release
* v1.XX+1 - stuff we are leaving until the next release
-* Soon - stuff we think is a good idea - waiting to be scheduled to a release
+* Soon - stuff we think is a good idea - waiting to be scheduled for a release
* Help wanted - blue sky stuff that might get moved up, or someone could help with
* Known bugs - bugs waiting on external factors or we aren't going to fix for the moment

@@ -65,7 +65,7 @@ Close tickets as soon as you can - make sure they are tagged with a release. Po

Try to process pull requests promptly!
-Merging pull requests on GitHub itself works quite well now-a-days so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
+Merging pull requests on GitHub itself works quite well nowadays, so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.

After merging the commit, in your local master branch, do `git pull` then run `bin/update-authors.py` to update the authors file then `git push`.

@@ -81,15 +81,15 @@ Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer

High impact regressions should be fixed before the next release.

-Near the start of the release cycle the dependencies should be updated with `make update` to give time for bugs to surface.
+Near the start of the release cycle, the dependencies should be updated with `make update` to give time for bugs to surface.

Towards the end of the release cycle try not to merge anything too big so let things settle down.

-Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time consuming often needing several rounds of test and fix depending on exactly how many new features rclone has gained.
+Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time-consuming, often needing several rounds of test and fix depending on exactly how many new features rclone has gained.

## Mailing list ##

-There is now an invite only mailing list for rclone developers `rclone-dev` on google groups.
+There is now an invite-only mailing list for rclone developers `rclone-dev` on Google Groups.
## TODO ## diff --git a/README.md b/README.md index 20f2bc76a..4c44f24fa 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ [Website](https://rclone.org) | [Documentation](https://rclone.org/docs/) | -[Download](https://rclone.org/downloads/) | +[Download](https://rclone.org/downloads/) | [Contributing](CONTRIBUTING.md) | [Changelog](https://rclone.org/changelog/) | [Installation](https://rclone.org/install/) | @@ -10,12 +10,12 @@ [![Build Status](https://github.com/rclone/rclone/workflows/build/badge.svg)](https://github.com/rclone/rclone/actions?query=workflow%3Abuild) [![Go Report Card](https://goreportcard.com/badge/github.com/rclone/rclone)](https://goreportcard.com/report/github.com/rclone/rclone) -[![GoDoc](https://godoc.org/github.com/rclone/rclone?status.svg)](https://godoc.org/github.com/rclone/rclone) +[![GoDoc](https://godoc.org/github.com/rclone/rclone?status.svg)](https://godoc.org/github.com/rclone/rclone) [![Docker Pulls](https://img.shields.io/docker/pulls/rclone/rclone)](https://hub.docker.com/r/rclone/rclone) # Rclone -Rclone *("rsync for cloud storage")* is a command line program to sync files and directories to and from different cloud storage providers. +Rclone *("rsync for cloud storage")* is a command-line program to sync files and directories to and from different cloud storage providers. 
## Storage providers @@ -72,7 +72,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and * Yandex Disk [:page_facing_up:](https://rclone.org/yandex/) * Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/) * The local filesystem [:page_facing_up:](https://rclone.org/local/) - + Please see [the full list of all storage providers and their features](https://rclone.org/overview/) ## Features diff --git a/backend/crypt/cipher.go b/backend/crypt/cipher.go index 470da07a0..de0bbd055 100644 --- a/backend/crypt/cipher.go +++ b/backend/crypt/cipher.go @@ -99,7 +99,7 @@ func NewNameEncryptionMode(s string) (mode NameEncryptionMode, err error) { return mode, err } -// String turns mode into a human readable string +// String turns mode into a human-readable string func (mode NameEncryptionMode) String() (out string) { switch mode { case NameEncryptionOff: diff --git a/backend/googlephotos/googlephotos.go b/backend/googlephotos/googlephotos.go index 8c557fe13..9bd4fd44d 100644 --- a/backend/googlephotos/googlephotos.go +++ b/backend/googlephotos/googlephotos.go @@ -139,7 +139,7 @@ you want to read the media.`, Default: false, Help: `Also view and download archived media. -By default rclone does not request archived media. Thus, when syncing, +By default, rclone does not request archived media. Thus, when syncing, archived media is not visible in directory listings or transferred. Note that media in albums is always visible and synced, no matter diff --git a/backend/http/http.go b/backend/http/http.go index f1ef9a170..8b68b0034 100644 --- a/backend/http/http.go +++ b/backend/http/http.go @@ -49,7 +49,7 @@ Use this to set additional HTTP headers for all transactions. The input format is comma separated list of key,value pairs. Standard [CSV encoding](https://godoc.org/encoding/csv) may be used. -For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'. 
+For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'. You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'. `, diff --git a/backend/mailru/mailru.go b/backend/mailru/mailru.go index 3ab0c0cfc..9aa969085 100644 --- a/backend/mailru/mailru.go +++ b/backend/mailru/mailru.go @@ -269,7 +269,7 @@ func errorHandler(res *http.Response) (err error) { } serverError.Message = string(data) if serverError.Message == "" || strings.HasPrefix(serverError.Message, "{") { - // Replace empty or JSON response with a human readable text. + // Replace empty or JSON response with a human-readable text. serverError.Message = res.Status } serverError.Status = res.StatusCode diff --git a/backend/mega/mega.go b/backend/mega/mega.go index 269c15695..5615cd640 100644 --- a/backend/mega/mega.go +++ b/backend/mega/mega.go @@ -261,7 +261,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e // splitNodePath splits nodePath into / separated parts, returning nil if it // should refer to the root. -// It also encodes the parts into backend specific encoding +// It also encodes the parts into backend-specific encoding func (f *Fs) splitNodePath(nodePath string) (parts []string) { nodePath = path.Clean(nodePath) if nodePath == "." 
|| nodePath == "/" { @@ -354,7 +354,7 @@ func (f *Fs) mkdir(ctx context.Context, rootNode *mega.Node, dir string) (node * } } if err != nil { - return nil, errors.Wrap(err, "internal error: mkdir called with non existent root node") + return nil, errors.Wrap(err, "internal error: mkdir called with non-existent root node") } // i is number of directories to create (may be 0) // node is directory to create them from diff --git a/backend/onedrive/onedrive.go b/backend/onedrive/onedrive.go index 48e1570e2..7e0b0a823 100755 --- a/backend/onedrive/onedrive.go +++ b/backend/onedrive/onedrive.go @@ -141,7 +141,7 @@ Note that the chunks will be buffered into memory.`, Name: "expose_onenote_files", Help: `Set to make OneNote files show up in directory listings. -By default rclone will hide OneNote files in directory listings because +By default, rclone will hide OneNote files in directory listings because operations like "Open" and "Update" won't work on them. But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory diff --git a/backend/webdav/webdav.go b/backend/webdav/webdav.go index 7524ffc18..12f312530 100644 --- a/backend/webdav/webdav.go +++ b/backend/webdav/webdav.go @@ -118,7 +118,7 @@ Use this to set additional HTTP headers for all transactions The input format is comma separated list of key,value pairs. Standard [CSV encoding](https://godoc.org/encoding/csv) may be used. -For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'. +For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'. You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'. 
`, diff --git a/bin/nfpm.yaml b/bin/nfpm.yaml index 7ce318f79..aeeba018c 100644 --- a/bin/nfpm.yaml +++ b/bin/nfpm.yaml @@ -9,7 +9,7 @@ provides: maintainer: "Nick Craig-Wood " description: | Rclone - "rsync for cloud storage" - is a command line program to sync files and directories to and + is a command-line program to sync files and directories to and from most cloud providers. It can also mount, tree, ncdu and lots of other useful things. vendor: "rclone" diff --git a/cmd/about/about.go b/cmd/about/about.go index 560526131..2e13fd126 100644 --- a/cmd/about/about.go +++ b/cmd/about/about.go @@ -76,7 +76,7 @@ Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g. Trashed: 104857602 Other: 8849156022 -A ` + "`--json`" + ` flag generates conveniently computer readable output, e.g. +A ` + "`--json`" + ` flag generates conveniently machine-readable output, e.g. { "total": 18253611008, diff --git a/cmd/backend/backend.go b/cmd/backend/backend.go index 0b00d2389..195758991 100644 --- a/cmd/backend/backend.go +++ b/cmd/backend/backend.go @@ -30,9 +30,9 @@ func init() { var commandDefinition = &cobra.Command{ Use: "backend remote:path [opts] ", - Short: `Run a backend specific command.`, + Short: `Run a backend-specific command.`, Long: ` -This runs a backend specific command. The commands themselves (except +This runs a backend-specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions. diff --git a/cmd/check/check.go b/cmd/check/check.go index c17f6e1c0..ec58deb22 100644 --- a/cmd/check/check.go +++ b/cmd/check/check.go @@ -136,7 +136,7 @@ var commandDefinition = &cobra.Command{ Short: `Checks the files in the source and destination match.`, Long: strings.ReplaceAll(` Checks the files in the source and destination match. 
It compares -sizes and hashes (MD5 or SHA1) and logs a report of files which don't +sizes and hashes (MD5 or SHA1) and logs a report of files that don't match. It doesn't alter the source or destination. If you supply the |--size-only| flag, it will only compare the sizes not diff --git a/cmd/config/config.go b/cmd/config/config.go index 3b27c2042..1c9d31ed5 100644 --- a/cmd/config/config.go +++ b/cmd/config/config.go @@ -214,7 +214,7 @@ var configCreateCommand = &cobra.Command{ Create a new remote of |name| with |type| and options. The options should be passed in pairs of |key| |value| or as |key=value|. -For example to make a swift remote of name myremote using auto config +For example, to make a swift remote of name myremote using auto config you would do: rclone config create myremote swift env_auth true @@ -277,7 +277,7 @@ var configUpdateCommand = &cobra.Command{ Update an existing remote's options. The options should be passed in pairs of |key| |value| or as |key=value|. -For example to update the env_auth field of a remote of name myremote +For example, to update the env_auth field of a remote of name myremote you would do: rclone config update myremote env_auth true @@ -317,7 +317,7 @@ Update an existing remote's password. The password should be passed in pairs of |key| |password| or as |key=password|. The |password| should be passed in in clear (unobscured). 
-For example to set password of a remote of name myremote you would do: +For example, to set password of a remote of name myremote you would do: rclone config password myremote fieldname mypassword rclone config password myremote fieldname=mypassword diff --git a/cmd/dedupe/dedupe.go b/cmd/dedupe/dedupe.go index 088cd78a4..270634aed 100644 --- a/cmd/dedupe/dedupe.go +++ b/cmd/dedupe/dedupe.go @@ -20,7 +20,7 @@ func init() { cmd.Root.AddCommand(commandDefinition) cmdFlag := commandDefinition.Flags() flags.FVarP(cmdFlag, &dedupeMode, "dedupe-mode", "", "Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename") - flags.BoolVarP(cmdFlag, &byHash, "by-hash", "", false, "Find indentical hashes rather than names") + flags.BoolVarP(cmdFlag, &byHash, "by-hash", "", false, "Find identical hashes rather than names") } var commandDefinition = &cobra.Command{ @@ -47,7 +47,7 @@ name. It will do this iteratively until all the identically named directories have been merged. Next, if deduping by name, for every group of duplicate file names / -hashes, it will delete all but one identical files it finds without +hashes, it will delete all but one identical file it finds without confirmation. This means that for most duplicated files the ` + "`dedupe`" + ` command will not be interactive. @@ -59,7 +59,7 @@ identical if they have the same size (any hash will be ignored). This can be useful on crypt backends which do not support hashes. Next rclone will resolve the remaining duplicates. Exactly which -action is taken depends on the dedupe mode. By default rclone will +action is taken depends on the dedupe mode. By default, rclone will interactively query the user for each one. **Important**: Since this can cause data loss, test first with the @@ -126,7 +126,7 @@ Dedupe can be run non interactively using the ` + "`" + `--dedupe-mode` + "`" + * ` + "`" + `--dedupe-mode rename` + "`" + ` - removes identical files then renames the rest to be different. 
* ` + "`" + `--dedupe-mode list` + "`" + ` - lists duplicate dirs and files only and changes nothing. -For example to rename all the identically named photos in your Google Photos directory, do +For example, to rename all the identically named photos in your Google Photos directory, do rclone dedupe --dedupe-mode rename "drive:Google Photos" diff --git a/cmd/ls/lshelp/lshelp.go b/cmd/ls/lshelp/lshelp.go index e1f919de7..f4a678be2 100644 --- a/cmd/ls/lshelp/lshelp.go +++ b/cmd/ls/lshelp/lshelp.go @@ -17,15 +17,15 @@ There are several related list commands * |lsf| to list objects and directories in easy to parse format * |lsjson| to list objects and directories in JSON format -|ls|,|lsl|,|lsd| are designed to be human readable. -|lsf| is designed to be human and machine readable. -|lsjson| is designed to be machine readable. +|ls|,|lsl|,|lsd| are designed to be human-readable. +|lsf| is designed to be human and machine-readable. +|lsjson| is designed to be machine-readable. Note that |ls| and |lsl| recurse by default - use |--max-depth 1| to stop the recursion. The other list commands |lsd|,|lsf|,|lsjson| do not recurse by default - use |-R| to make them recurse. -Listing a non existent directory will produce an error except for +Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - -the bucket based remotes). +the bucket-based remotes). `, "|", "`") diff --git a/cmd/lsf/lsf.go b/cmd/lsf/lsf.go index b9450f7ef..6ce79a34a 100644 --- a/cmd/lsf/lsf.go +++ b/cmd/lsf/lsf.go @@ -93,13 +93,13 @@ can be returned as an empty string if it isn't available on the object the object and "UNSUPPORTED" if that object does not support that hash type. -For example to emulate the md5sum command you can use +For example, to emulate the md5sum command you can use rclone lsf -R --hash MD5 --format hp --separator " " --files-only . 
Eg - $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket + $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket 7908e352297f0f530b84a756f188baa3 bevajer5jef cd65ac234e6fea5925974a51cdd865cc canole 03b5341b4f234b9d984d03ad076bae91 diwogej7 @@ -134,7 +134,7 @@ Eg Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw flag. -For example to find all the files modified within one day and copy +For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure): rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files diff --git a/cmd/lsjson/lsjson.go b/cmd/lsjson/lsjson.go index 1ff4cf069..c2bb8aace 100644 --- a/cmd/lsjson/lsjson.go +++ b/cmd/lsjson/lsjson.go @@ -93,7 +93,7 @@ If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name. -If the directory is a bucket in a bucket based backend, then +If the directory is a bucket in a bucket-based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true". diff --git a/cmd/mountlib/help.go b/cmd/mountlib/help.go index 6cc35e9ba..641097c9d 100644 --- a/cmd/mountlib/help.go +++ b/cmd/mountlib/help.go @@ -65,7 +65,7 @@ at all, then 1 PiB is set as both the total and the free size. To run rclone @ on Windows, you will need to download and install [WinFsp](http://www.secfs.net/winfsp/). -[WinFsp](https://github.com/billziss-gh/winfsp) is an open source +[WinFsp](https://github.com/billziss-gh/winfsp) is an open-source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses combination with [cgofuse](https://github.com/billziss-gh/cgofuse). 
@@ -235,7 +235,7 @@ applications won't work with their files on an rclone mount without |--vfs-cache-mode writes| or |--vfs-cache-mode full|. See the [VFS File Caching](#vfs-file-caching) section for more info. -The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2, +The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache. diff --git a/cmd/obscure/obscure.go b/cmd/obscure/obscure.go index 14ef80376..71987079d 100644 --- a/cmd/obscure/obscure.go +++ b/cmd/obscure/obscure.go @@ -18,7 +18,7 @@ func init() { var commandDefinition = &cobra.Command{ Use: "obscure password", Short: `Obscure password for use in the rclone config file.`, - Long: `In the rclone config file, human readable passwords are + Long: `In the rclone config file, human-readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is **not** a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" diff --git a/cmd/serve/restic/restic.go b/cmd/serve/restic/restic.go index 43bcadc87..a90b3295a 100644 --- a/cmd/serve/restic/restic.go +++ b/cmd/serve/restic/restic.go @@ -51,7 +51,7 @@ var Command = &cobra.Command{ over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly. -[Restic](https://restic.net/) is a command line program for doing +[Restic](https://restic.net/) is a command-line program for doing backups. The server will log errors. Use -v to see access logs. diff --git a/cmd/tree/tree.go b/cmd/tree/tree.go index 8fd75e53f..59eaef48e 100644 --- a/cmd/tree/tree.go +++ b/cmd/tree/tree.go @@ -82,7 +82,7 @@ For example └── subdir ├── file4 └── file5 - + 1 directories, 5 files You can use any of the filtering options with the tree command (e.g. 
diff --git a/docs/README.md b/docs/README.md index a82f8529e..64bab4e37 100644 --- a/docs/README.md +++ b/docs/README.md @@ -5,7 +5,7 @@ rclone. See the `content` directory for the docs in markdown format. -Note that some of the docs are auto generated - these should have a DO +Note that some of the docs are auto-generated - these should have a DO NOT EDIT marker near the top. Use [hugo](https://github.com/spf13/hugo) to build the website. @@ -28,7 +28,7 @@ so it is easy to tweak stuff. ├── config.json - hugo config file ├── content - docs and backend docs │   ├── _index.md - the front page of rclone.org -│   ├── commands - auto generated command docs - DO NOT EDIT +│   ├── commands - auto-generated command docs - DO NOT EDIT ├── i18n │   └── en.toml - hugo multilingual config ├── layouts - how the markdown gets converted into HTML diff --git a/docs/content/_index.md b/docs/content/_index.md index 641157ec8..d97310795 100644 --- a/docs/content/_index.md +++ b/docs/content/_index.md @@ -19,8 +19,8 @@ notoc: true ## About rclone {#about} -Rclone is a command line program to manage files on cloud storage. It -is a feature rich alternative to cloud vendors' web storage +Rclone is a command-line program to manage files on cloud storage. It +is a feature-rich alternative to cloud vendors' web storage interfaces. [Over 40 cloud storage products](#providers) support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols. @@ -43,7 +43,7 @@ bandwidth use and transfers from one provider to another without using local disk. Virtual backends wrap local and cloud file systems to apply -[encryption](/crypt/), +[encryption](/crypt/), [compression](/compress/), [chunking](/chunker/), [hashing](/hasher/) and @@ -58,13 +58,13 @@ macOS, linux and FreeBSD, and also serves these over [FTP](/commands/rclone_serve_ftp/) and [DLNA](/commands/rclone_serve_dlna/). 
-Rclone is mature, open source software originally inspired by rsync +Rclone is mature, open-source software originally inspired by rsync and written in [Go](https://golang.org). The friendly support -community are familiar with varied use cases. Official Ubuntu, Debian, +community is familiar with varied use cases. Official Ubuntu, Debian, Fedora, Brew and Chocolatey repos. include rclone. For the latest version [downloading from rclone.org](/downloads/) is recommended. -Rclone is widely used on Linux, Windows and Mac. Third party +Rclone is widely used on Linux, Windows and Mac. Third-party developers create innovative backup, restore, GUI and business process solutions using the rclone command line or API. @@ -77,7 +77,7 @@ Rclone helps you: - Backup (and encrypt) files to cloud storage - Restore (and decrypt) files from cloud storage - Mirror cloud data to other cloud services or locally -- Migrate data to cloud, or between cloud storage vendors +- Migrate data to the cloud, or between cloud storage vendors - Mount multiple, encrypted, cached or diverse cloud storage as a disk - Analyse and account for data held on cloud storage using [lsf](/commands/rclone_lsf/), [ljson](/commands/rclone_lsjson/), [size](/commands/rclone_size/), [ncdu](/commands/rclone_ncdu/) - [Union](/union/) file systems together to present multiple local and/or cloud file systems as one diff --git a/docs/content/amazonclouddrive.md b/docs/content/amazonclouddrive.md index 7d71d8656..766772820 100644 --- a/docs/content/amazonclouddrive.md +++ b/docs/content/amazonclouddrive.md @@ -36,7 +36,7 @@ which pass through it. Since rclone doesn't currently have its own Amazon Drive credentials so you will either need to have your own `client_id` and -`client_secret` with Amazon Drive, or use a third party oauth proxy +`client_secret` with Amazon Drive, or use a third-party oauth proxy in which case you will need to enter `client_id`, `client_secret`, `auth_url` and `token_url`. 
@@ -148,7 +148,7 @@ as they can't be used in JSON strings. Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via -the Amazon Drive website. As of November 17, 2016, files are +the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days. ### Using with non `.com` Amazon accounts diff --git a/docs/content/bugs.md b/docs/content/bugs.md index 9d4aa6460..061c277bf 100644 --- a/docs/content/bugs.md +++ b/docs/content/bugs.md @@ -22,11 +22,11 @@ Millions of files in a directory tends to occur on bucket-based remotes (e.g. S3 buckets) since those remotes do not segregate subdirectories within the bucket. -### Bucket based remotes and folders +### Bucket-based remotes and folders -Bucket based remotes (e.g. S3/GCS/Swift/B2) do not have a concept of +Bucket-based remotes (e.g. S3/GCS/Swift/B2) do not have a concept of directories. Rclone therefore cannot create directories in them which -means that empty directories on a bucket based remote will tend to +means that empty directories on a bucket-based remote will tend to disappear. Some software creates empty keys ending in `/` as directory markers. diff --git a/docs/content/changelog.md b/docs/content/changelog.md index d3ea37397..567345bb6 100644 --- a/docs/content/changelog.md +++ b/docs/content/changelog.md @@ -376,7 +376,7 @@ description: "Rclone Changelog" * New Features * [Connection strings](/docs/#connection-strings) * Config parameters can now be passed as part of the remote name as a connection string. 
- * For example to do the equivalent of `--drive-shared-with-me` use `drive,shared_with_me:` + * For example, to do the equivalent of `--drive-shared-with-me` use `drive,shared_with_me:` * Make sure we don't save on the fly remote config to the config file (Nick Craig-Wood) * Make sure backends with additional config have a different name for caching (Nick Craig-Wood) * This work was sponsored by CERN, through the [CS3MESH4EOSC Project](https://cs3mesh4eosc.eu/). @@ -629,7 +629,7 @@ description: "Rclone Changelog" * And thanks to these people for many doc fixes too numerous to list * Ameer Dawood, Antoine GIRARD, Bob Bagwill, Christopher Stewart * CokeMine, David, Dov Murik, Durval Menezes, Evan Harris, gtorelly - * Ilyess Bachiri, Janne Johansson, Kerry Su, Marcin Zelent, + * Ilyess Bachiri, Janne Johansson, Kerry Su, Marcin Zelent, * Martin Michlmayr, Milly, Sơn Trần-Nguyễn * Mount * Update systemd status with cache stats (Hekmon) @@ -1174,7 +1174,7 @@ all the docs and Edward Barker for helping re-write the front page. * [Union](/union/) re-write to have multiple writable remotes (Max Sum) * [Seafile](/seafile) for Seafile server (Fred @creativeprojects) * New commands - * backend: command for backend specific commands (see backends) (Nick Craig-Wood) + * backend: command for backend-specific commands (see backends) (Nick Craig-Wood) * cachestats: Deprecate in favour of `rclone backend stats cache:` (Nick Craig-Wood) * dbhashsum: Deprecate in favour of `rclone hashsum DropboxHash` (Nick Craig-Wood) * New Features @@ -1211,7 +1211,7 @@ all the docs and Edward Barker for helping re-write the front page. 
    * lsjson: Add `--hash-type` parameter and use it in lsf to speed up hashing (Nick Craig-Wood)
    * rc
        * Add `-o`/`--opt` and `-a`/`--arg` for more structured input (Nick Craig-Wood)
-        * Implement `backend/command` for running backend specific commands remotely (Nick Craig-Wood)
+        * Implement `backend/command` for running backend-specific commands remotely (Nick Craig-Wood)
        * Add `mount/mount` command for starting `rclone mount` via the API (Chaitanya)
    * rcd: Add Prometheus metrics support (Gary Kim)
    * serve http
@@ -1638,7 +1638,7 @@ all the docs and Edward Barker for helping re-write the front page.
    * Add flag `--vfs-case-insensitive` for windows/macOS mounts (Ivan Andreev)
    * Make objects of unknown size readable through the VFS (Nick Craig-Wood)
    * Move writeback of dirty data out of close() method into its own method (FlushWrites) and remove close() call from Flush() (Brett Dutro)
-    * Stop empty dirs disappearing when renamed on bucket based remotes (Nick Craig-Wood)
+    * Stop empty dirs disappearing when renamed on bucket-based remotes (Nick Craig-Wood)
    * Stop change notify polling clearing so much of the directory cache (Nick Craig-Wood)
* Azure Blob
    * Disable logging to the Windows event log (Nick Craig-Wood)
@@ -1791,7 +1791,7 @@ all the docs and Edward Barker for helping re-write the front page.
    * rcd: Fix permissions problems on cache directory with web gui download (Nick Craig-Wood)
* Mount
    * Default `--daemon-timout` to 15 minutes on macOS and FreeBSD (Nick Craig-Wood)
-    * Update docs to show mounting from root OK for bucket based (Nick Craig-Wood)
+    * Update docs to show mounting from root OK for bucket-based (Nick Craig-Wood)
    * Remove nonseekable flag from write files (Nick Craig-Wood)
* VFS
    * Make write without cache more efficient (Nick Craig-Wood)
@@ -1858,7 +1858,7 @@ all the docs and Edward Barker for helping re-write the front page.
        * controlled with `--multi-thread-cutoff` and `--multi-thread-streams`
    * Use rclone.conf from rclone executable directory to enable portable use (albertony)
    * Allow sync of a file and a directory with the same name (forgems)
-        * this is common on bucket based remotes, e.g. s3, gcs
+        * this is common on bucket-based remotes, e.g. s3, gcs
    * Add `--ignore-case-sync` for forced case insensitivity (garry415)
    * Implement `--stats-one-line-date` and `--stats-one-line-date-format` (Peter Berbec)
    * Log an ERROR for all commands which exit with non-zero status (Nick Craig-Wood)
@@ -1872,7 +1872,7 @@ all the docs and Edward Barker for helping re-write the front page.
    * lsjson
        * Added EncryptedPath to output (calisro)
        * Support showing the Tier of the object (Nick Craig-Wood)
-        * Add IsBucket field for bucket based remote listing of the root (Nick Craig-Wood)
+        * Add IsBucket field for bucket-based remote listing of the root (Nick Craig-Wood)
    * rc
        * Add `--loopback` flag to run commands directly without a server (Nick Craig-Wood)
        * Add operations/fsinfo: Return information about the remote (Nick Craig-Wood)
@@ -1888,7 +1888,7 @@ all the docs and Edward Barker for helping re-write the front page.
    * Make move and copy individual files obey `--backup-dir` (Nick Craig-Wood)
    * If `--ignore-checksum` is in effect, don't calculate checksum (Nick Craig-Wood)
    * moveto: Fix case-insensitive same remote move (Gary Kim)
-    * rc: Fix serving bucket based objects with `--rc-serve` (Nick Craig-Wood)
+    * rc: Fix serving bucket-based objects with `--rc-serve` (Nick Craig-Wood)
    * serve webdav: Fix serveDir not being updated with changes from webdav (Gary Kim)
* Mount
    * Fix poll interval documentation (Animosity022)
@@ -2573,7 +2573,7 @@ Point release to fix hubic and azureblob backends.
        * Always forget parent dir for notifications
        * Integrate with Plex websocket
        * Add rc cache/stats (seuffert)
-        * Add info log on notification 
+        * Add info log on notification
* Box
    * Fix failure reading large directories - parse file/directory size as float
* Dropbox
@@ -2754,7 +2754,7 @@ Point release to fix hubic and azureblob backends.
    * Fix following of symlinks
    * Fix reading config file outside of Fs setup
    * Fix reading $USER in username fallback not $HOME
-    * Fix running under crontab - Use correct OS way of reading username 
+    * Fix running under crontab - Use correct OS way of reading username
* Swift
    * Fix refresh of authentication token
        * in v1.39 a bug was introduced which ignored new tokens - this fixes it
@@ -2917,7 +2917,7 @@ Point release to fix hubic and azureblob backends.
    * HTTP - thanks to Vasiliy Tolstov
* New commands
    * rclone ncdu - for exploring a remote with a text based user interface.
-    * rclone lsjson - for listing with a machine readable output
+    * rclone lsjson - for listing with a machine-readable output
    * rclone dbhashsum - to show Dropbox style hashes of files (local or Dropbox)
* New Features
    * Implement --fast-list flag
@@ -3181,7 +3181,7 @@ Point release to fix hubic and azureblob backends.
    * Unix: implement `-x`/`--one-file-system` to stay on a single file system
        * thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana
    * Windows: ignore the symlink bit on files
-    * Windows: Ignore directory based junction points
+    * Windows: Ignore directory-based junction points
* B2
    * Make sure each upload has at least one upload slot - fixes strange upload stats
    * Fix uploads when using crypt
@@ -3284,7 +3284,7 @@ Point release to fix hubic and azureblob backends.
    * Retry more errors
    * Add --ignore-size flag - for uploading images to onedrive
    * Log -v output to stdout by default
-    * Display the transfer stats in more human readable form
+    * Display the transfer stats in more human-readable form
    * Make 0 size files specifiable with `--max-size 0b`
    * Add `b` suffix so we can specify bytes in --bwlimit, --min-size, etc.
    * Use "password:" instead of "password>" prompt - thanks Klaus Post and Leigh Klotz
diff --git a/docs/content/chunker.md b/docs/content/chunker.md
index dafd78d78..53a356da9 100644
--- a/docs/content/chunker.md
+++ b/docs/content/chunker.md
@@ -18,7 +18,7 @@ a remote.
First check your chosen remote is working - we'll call it `remote:path` here.
Note that anything inside `remote:path` will be chunked and anything outside
-won't. This means that if you are using a bucket based remote (e.g. S3, B2, swift)
+won't. This means that if you are using a bucket-based remote (e.g. S3, B2, swift)
then you should probably put the bucket in the remote `s3:bucket`.

Now configure `chunker` using `rclone config`. We will call this one `overlay`
diff --git a/docs/content/crypt.md b/docs/content/crypt.md
index a7a43fe96..0d22100b7 100644
--- a/docs/content/crypt.md
+++ b/docs/content/crypt.md
@@ -224,7 +224,7 @@ it when needed.
If you intend to use the wrapped remote both directly for keeping
unencrypted content, as well as through a crypt remote for encrypted
content, it is recommended to point the crypt remote to a separate
-directory within the wrapped remote. If you use a bucket based storage
+directory within the wrapped remote. If you use a bucket-based storage
system (e.g. Swift, S3, Google Compute Storage, B2, Hubic) it is generally
advisable to wrap the crypt remote around a specific bucket (`s3:bucket`).
If wrapping around the entire root of the storage (`s3:`), and use the
diff --git a/docs/content/docs.md b/docs/content/docs.md
index b508b008c..0290de904 100644
--- a/docs/content/docs.md
+++ b/docs/content/docs.md
@@ -278,7 +278,7 @@ This will make `parameter` be `with"quote` and `parameter2`
be `with'quote`.

If you leave off the `=parameter` then rclone will substitute `=true`
-which works very well with flags. For example to use s3 configured in
+which works very well with flags. For example, to use s3 configured in
the environment you could use:

    rclone lsd :s3,env_auth:

@@ -485,7 +485,7 @@ it will give an error.
This option controls the bandwidth limit. For example

    --bwlimit 10M
- 
+
would mean limit the upload and download bandwidth to 10 MiB/s.
**NB** this is **bytes** per second not **bits** per second. To use a
single limit, specify the desired bandwidth in KiB/s, or use a
@@ -664,12 +664,12 @@ they are incorrect as it would normally.

### --compare-dest=DIR ###

-When using `sync`, `copy` or `move` DIR is checked in addition to the 
-destination for files. If a file identical to the source is found that 
-file is NOT copied from source. This is useful to copy just files that 
+When using `sync`, `copy` or `move` DIR is checked in addition to the
+destination for files. If a file identical to the source is found that
+file is NOT copied from source. This is useful to copy just files that
have changed since the last backup.

-You must use the same remote as the destination of the sync. The 
+You must use the same remote as the destination of the sync. The
compare directory must not overlap the destination directory.

See `--copy-dest` and `--backup-dir`.
@@ -772,9 +772,9 @@ connection to go through to a remote object storage system. It is

### --copy-dest=DIR ###

-When using `sync`, `copy` or `move` DIR is checked in addition to the 
-destination for files. If a file identical to the source is found that 
-file is server-side copied from DIR to the destination. This is useful
+When using `sync`, `copy` or `move` DIR is checked in addition to the
+destination for files. If a file identical to the source is found that
+file is server-side copied from DIR to the destination. This is useful
for incremental backup.

The remote in use must support server-side copy and you must
@@ -951,7 +951,7 @@ default, and responds to key `u` for toggling human-readable format.

### --ignore-case-sync ###

-Using this option will cause rclone to ignore the case of the files 
+Using this option will cause rclone to ignore the case of the files
when synchronizing so files will not be copied/synced when the
existing filenames are the same, even if the casing is different.

@@ -1097,7 +1097,7 @@ warnings and significant events.

### --use-json-log ###

-This switches the log format to JSON for rclone. The fields of json log 
+This switches the log format to JSON for rclone. The fields of json log
are level, msg, source, time.

### --low-level-retries NUMBER ###
@@ -1479,7 +1479,7 @@ Disable retries with `--retries 1`.

### --retries-sleep=TIME ###

-This sets the interval between each retry specified by `--retries` 
+This sets the interval between each retry specified by `--retries`

The default is `0`. Use `0` to disable.

@@ -1516,9 +1516,9 @@ Note that on macOS you can send a SIGINFO (which is normally ctrl-T in
the terminal) to make the stats print immediately.

### --stats-file-name-length integer ###
-By default, the `--stats` output will truncate file names and paths longer 
-than 40 characters. This is equivalent to providing 
-`--stats-file-name-length 40`. Use `--stats-file-name-length 0` to disable 
+By default, the `--stats` output will truncate file names and paths longer
+than 40 characters. This is equivalent to providing
+`--stats-file-name-length 40`. Use `--stats-file-name-length 0` to disable
any truncation of file names printed by stats.

### --stats-log-level string ###
@@ -1562,14 +1562,14 @@ The default is `bytes`.
### --suffix=SUFFIX ###

When using `sync`, `copy` or `move` any files which would have been
-overwritten or deleted will have the suffix added to them. If there 
-is a file with the same path (after the suffix has been added), then 
+overwritten or deleted will have the suffix added to them. If there
+is a file with the same path (after the suffix has been added), then
it will be overwritten.

The remote in use must support server-side move or copy and you must
use the same remote as the destination of the sync.

-This is for use with files to add the suffix in the current directory 
+This is for use with files to add the suffix in the current directory
or with `--backup-dir`. See `--backup-dir` for more info.

For example
@@ -1633,7 +1633,7 @@ will depend on the backend. For HTTP based backends it is an HTTP
PUT/GET/POST/etc and its response. For FTP/SFTP it is a round trip
transaction over TCP.

-For example to limit rclone to 10 transactions per second use
+For example, to limit rclone to 10 transactions per second use
`--tpslimit 10`, or to 1 transaction every 2 seconds use `--tpslimit
0.5`.

@@ -1749,7 +1749,7 @@ quickly using the least amount of memory.

However, some remotes have a way of listing all files beneath a
directory in one (or a small number) of transactions. These tend to
-be the bucket based remotes (e.g. S3, B2, GCS, Swift, Hubic).
+be the bucket-based remotes (e.g. S3, B2, GCS, Swift, Hubic).

If you use the `--fast-list` flag then rclone will use this method for
listing directories. This will have the following consequences for
@@ -1898,8 +1898,8 @@ This option defaults to `false`.

Configuration Encryption
------------------------
-Your configuration file contains information for logging in to 
-your cloud services. This means that you should keep your 
+Your configuration file contains information for logging in to
+your cloud services. This means that you should keep your
`rclone.conf` file in a secure location.
If you are in an environment where that isn't possible, you can
@@ -1947,8 +1947,8 @@ encryption from your configuration.

There is no way to recover the configuration if you lose your password.

-rclone uses [nacl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox) 
-which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate 
+rclone uses [nacl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox)
+which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate
your configuration with secret-key cryptography.
The password is SHA-256 hashed, which produces the key for secretbox.
The hashed password is not stored.
@@ -2000,8 +2000,8 @@ script method of supplying the password enhances the security of
the config password considerably.

If you are running rclone inside a script, unless you are using the
-`--password-command` method, you might want to disable 
-password prompts. To do that, pass the parameter 
+`--password-command` method, you might want to disable
+password prompts. To do that, pass the parameter
`--ask-password=false` to rclone. This will make rclone fail instead
of asking for a password if `RCLONE_CONFIG_PASS` doesn't contain
a valid password, and `--password-command` has not been supplied.
@@ -2039,9 +2039,9 @@ Write CPU profile to file. This can be analysed with `go tool pprof`.

The `--dump` flag takes a comma separated list of flags to dump info about.

-Note that some headers including `Accept-Encoding` as shown may not 
+Note that some headers including `Accept-Encoding` as shown may not
be correct in the request and the response may not show `Content-Encoding`
-if the go standard libraries auto gzip encoding was in effect. In this case 
+if the go standard library's auto gzip encoding was in effect. In this case
the body of the request will be gunzipped before showing it.

The available flags are:
@@ -2279,7 +2279,7 @@ this order and the first one with a value is used.

- Parameters in connection strings, e.g. `myRemote,skip_links:`
- Flag values as supplied on the command line, e.g. `--skip-links`
- Remote specific environment vars, e.g. `RCLONE_CONFIG_MYREMOTE_SKIP_LINKS` (see above).
-- Backend specific environment vars, e.g. `RCLONE_LOCAL_SKIP_LINKS`.
+- Backend-specific environment vars, e.g. `RCLONE_LOCAL_SKIP_LINKS`.
- Backend generic environment vars, e.g. `RCLONE_SKIP_LINKS`.
- Config file, e.g. `skip_links = true`.
- Default values, e.g. `false` - these can't be changed.
diff --git a/docs/content/donate.md b/docs/content/donate.md
index f3044ab45..59fd6a977 100644
--- a/docs/content/donate.md
+++ b/docs/content/donate.md
@@ -6,7 +6,7 @@ type: page

# {{< icon "fa fa-heart heart" >}} Donations to the rclone project

-Rclone is a free open source project with thousands of contributions
+Rclone is a free open-source project with thousands of contributions
from volunteers all round the world and I would like to thank all of
you for donating your time to the project.

diff --git a/docs/content/faq.md b/docs/content/faq.md
index 497fa4f0e..f6f31a1b6 100644
--- a/docs/content/faq.md
+++ b/docs/content/faq.md
@@ -190,7 +190,7 @@ issues with DNS resolution. See the [name resolution section in the go docs](htt

### The total size reported in the stats for a sync is wrong and keeps changing

It is likely you have more than 10,000 files that need to be
-synced. By default rclone only gets 10,000 files ahead in a sync so as
+synced. By default, rclone only gets 10,000 files ahead in a sync so as
not to use up too much memory. You can change this default with the
[--max-backlog](/docs/#max-backlog-n) flag.

diff --git a/docs/content/filtering.md b/docs/content/filtering.md
index 59f9b3d41..cc6f2e426 100644
--- a/docs/content/filtering.md
+++ b/docs/content/filtering.md
@@ -386,7 +386,7 @@ statement. For more flexibility use the `--filter-from` flag.
### `--filter` - Add a file-filtering rule

Specifies path/file names to an rclone command, based on a single
-include or exclude rule, in `+` or `-` format. 
+include or exclude rule, in `+` or `-` format.

This flag can be repeated. See above for the order filter flags are
processed in.
@@ -555,7 +555,7 @@ input to `--files-from-raw`.

### `--ignore-case` - make searches case insensitive

-By default rclone filter patterns are case sensitive. The `--ignore-case`
+By default, rclone filter patterns are case sensitive. The `--ignore-case`
flag makes all of the filter patterns on the command line case
insensitive.

diff --git a/docs/content/gui.md b/docs/content/gui.md
index 01181e318..32236bc11 100644
--- a/docs/content/gui.md
+++ b/docs/content/gui.md
@@ -17,7 +17,7 @@ rclone rcd --rc-web-gui
```

This will produce logs like this and rclone needs to continue to run to serve the GUI:
- 
+
```
2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip
2019/08/25 11:40:14 NOTICE: Downloading webgui binary. Please wait. [Size: 3813937, Path : /home/USER/.cache/rclone/webgui/v0.0.6.zip]
@@ -28,12 +28,12 @@ This will produce logs like this and rclone needs to continue to run to serve th
This assumes you are running rclone locally on your machine. It is
possible to separate the rclone and the GUI - see below for details.

-If you wish to check for updates then you can add `--rc-web-gui-update` 
+If you wish to check for updates then you can add `--rc-web-gui-update`
to the command line.

If you find your GUI broken, you may force it to update by adding
`--rc-web-gui-force-update`.

-By default, rclone will open your browser. Add `--rc-web-gui-no-open-browser` 
+By default, rclone will open your browser. Add `--rc-web-gui-no-open-browser`
to disable this feature.
## Using the GUI

@@ -55,7 +55,7 @@ On the left hand side you will see a series of view buttons you can click on:

When you run the `rclone rcd --rc-web-gui` this is what happens

- Rclone starts but only runs the remote control API ("rc").
-- The API is bound to localhost with an auto generated username and password.
+- The API is bound to localhost with an auto-generated username and password.
- If the API bundle is missing then rclone will download it.
- rclone will start serving the files from the API bundle over the same port as the API
- rclone will open the browser with a `login_token` so it can log straight in.
diff --git a/docs/content/install.md b/docs/content/install.md
index bfffe49c1..ac82708fe 100644
--- a/docs/content/install.md
+++ b/docs/content/install.md
@@ -48,12 +48,12 @@ Copy binary file

    sudo cp rclone /usr/bin/
    sudo chown root:root /usr/bin/rclone
    sudo chmod 755 /usr/bin/rclone
- 
+
Install manpage

    sudo mkdir -p /usr/local/share/man/man1
    sudo cp rclone.1 /usr/local/share/man/man1/
-    sudo mandb 
+    sudo mandb

Run `rclone config` to set up. See [rclone config docs](/docs/) for more details.

@@ -229,7 +229,7 @@ Instructions

1. `git clone https://github.com/stefangweichinger/ansible-rclone.git` into your local roles-directory
2. add the role to the hosts you want rclone installed to:
- 
+
```
    - hosts: rclone-hosts
      roles:
@@ -346,7 +346,7 @@ your rclone command, as an alternative to scheduled task configured to run at st

##### Mount command built-in service integration ####

-For mount commands, Rclone has a built-in Windows service integration via the third party
+For mount commands, Rclone has a built-in Windows service integration via the third-party
WinFsp library it uses. Registering as a regular Windows service is easy, as you just
have to execute the built-in PowerShell command `New-Service` (requires administrative privileges).

@@ -366,9 +366,9 @@ Windows standard methods for managing network drives.
This is currently not officially supported by Rclone, but with WinFsp version
2019.3 B2 / v1.5B2 or later it should be possible through path rewriting as described
[here](https://github.com/rclone/rclone/issues/3340).

-##### Third party service integration ####
+##### Third-party service integration #####

-To Windows service running any rclone command, the excellent third party utility
+To run any rclone command as a Windows service, the excellent third-party utility
[NSSM](http://nssm.cc), the "Non-Sucking Service Manager", can be used.
It includes some advanced features such as adjusting process priority, defining
process environment variables, redirect to file anything written to stdout, and
diff --git a/docs/content/jottacloud.md b/docs/content/jottacloud.md
index bcf0925df..02c506fc4 100644
--- a/docs/content/jottacloud.md
+++ b/docs/content/jottacloud.md
@@ -107,7 +107,7 @@ Choose a number from below, or type in an existing value
  1 > Archive
  2 > Links
  3 > Sync
- 
+
Mountpoints> 1
--------------------
[jotta]
@@ -200,7 +200,7 @@ as they can't be used in XML strings.

### Deleting files

-By default rclone will send all files to the trash when deleting files. They will be permanently
+By default, rclone will send all files to the trash when deleting files. They will be permanently
deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately
by using the [--jottacloud-hard-delete](#jottacloud-hard-delete) flag, or set the equivalent environment variable.
Emptying the trash is supported by the [cleanup](/commands/rclone_cleanup/) command.
diff --git a/docs/content/memory.md b/docs/content/memory.md
index 6970e45ce..599a1975c 100644
--- a/docs/content/memory.md
+++ b/docs/content/memory.md
@@ -8,7 +8,7 @@ description: "Rclone docs for Memory backend"

The memory backend is an in RAM backend. It does not persist its
data - use the local backend for that.

-The memory backend behaves like a bucket based remote (e.g. like
+The memory backend behaves like a bucket-based remote (e.g. like
s3). Because it has no parameters you can just use it with the
`:memory:` remote name.

diff --git a/docs/content/overview.md b/docs/content/overview.md
index 86ac7fcbe..7a1de1b09 100644
--- a/docs/content/overview.md
+++ b/docs/content/overview.md
@@ -406,7 +406,7 @@ remote itself may assign the MIME type.

## Optional Features ##

All rclone remotes support a base command set. Other features depend
-upon backend specific capabilities.
+upon backend-specific capabilities.

| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | LinkSharing | About | EmptyDir |
| ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------:|:-----:|:--------:|
@@ -428,7 +428,7 @@ upon backend specific capabilities.
| Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
| Mail.ru Cloud | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
| Mega | Yes | No | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
-| Memory | No | Yes | No | No | No | Yes | Yes | No | No | No | 
+| Memory | No | Yes | No | No | No | Yes | Yes | No | No | No |
| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | No | No | No |
| Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes |
@@ -529,4 +529,4 @@ See [rclone about command](https://rclone.org/commands/rclone_about/)

### EmptyDir ###

The remote supports empty directories. See [Limitations](/bugs/#limitations)
- for details. Most Object/Bucket based remotes do not support this.
+ for details. Most Object/Bucket-based remotes do not support this.
diff --git a/docs/content/privacy.md b/docs/content/privacy.md
index c76103ce9..8efdfac70 100644
--- a/docs/content/privacy.md
+++ b/docs/content/privacy.md
@@ -55,7 +55,7 @@ This website may use social sharing buttons which help share web content directl

## Use of Cloud API User Data ##

-Rclone is a command line program to manage files on cloud storage. Its sole purpose is to access and manipulate user content in the [supported](/overview/) cloud storage systems from a local machine of the end user. For accessing the user content via the cloud provider API, Rclone uses authentication mechanisms, such as OAuth or HTTP Cookies, depending on the particular cloud provider offerings. Use of these authentication mechanisms and user data is governed by the privacy policies mentioned in the [Resources & Further Information](/privacy/#resources-further-information) section and followed by the privacy policy of Rclone.
+Rclone is a command-line program to manage files on cloud storage. Its sole purpose is to access and manipulate user content in the [supported](/overview/) cloud storage systems from a local machine of the end user. For accessing the user content via the cloud provider API, Rclone uses authentication mechanisms, such as OAuth or HTTP Cookies, depending on the particular cloud provider offerings. Use of these authentication mechanisms and user data is governed by the privacy policies mentioned in the [Resources & Further Information](/privacy/#resources-further-information) section and followed by the privacy policy of Rclone.

* Rclone provides the end user with access to their files available in a storage system associated by the authentication credentials via the publicly exposed API of the storage system.
* Rclone allows storing the authentication credentials on the user machine in the local configuration file.
diff --git a/docs/content/rc.md b/docs/content/rc.md
index 23226a220..989716831 100644
--- a/docs/content/rc.md
+++ b/docs/content/rc.md
@@ -1632,7 +1632,7 @@ parameters or by supplying "Content-Type: application/json" and a JSON
blob in the body. There are examples of these below using `curl`.

The response will be a JSON blob in the body of the response. This is
-formatted to be reasonably human readable.
+formatted to be reasonably human-readable.

### Error returns

diff --git a/docs/content/s3.md b/docs/content/s3.md
index aad4b7a68..15791ba86 100644
--- a/docs/content/s3.md
+++ b/docs/content/s3.md
@@ -151,7 +151,7 @@ Choose a number from below, or type in your own value
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
-endpoint> 
+endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
@@ -239,16 +239,16 @@ env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
-endpoint = 
-location_constraint = 
+endpoint =
+location_constraint =
acl = private
-server_side_encryption = 
-storage_class = 
+server_side_encryption =
+storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
-y/e/d> 
+y/e/d>
```

### Modified time

@@ -268,7 +268,7 @@ request as the metadata isn't returned in object listings.

#### Avoiding HEAD requests to read the modification time

-By default rclone will use the modification time of objects stored in
+By default, rclone will use the modification time of objects stored in
S3 for syncing. This is stored in object metadata which unfortunately
takes an extra HEAD request to read which can be expensive (in time and money).

@@ -347,7 +347,7 @@ Note that `--fast-list` isn't required in the top-up sync.
#### Avoiding HEAD requests after PUT

-By default rclone will HEAD every object it uploads. It does this to
+By default, rclone will HEAD every object it uploads. It does this to
check the object got uploaded correctly.

You can disable this with the [--s3-no-head](#s3-no-head) option - see
@@ -513,7 +513,7 @@ Example policy:
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
-        } 
+        }
    ]
}
```

@@ -1940,14 +1940,14 @@ up looking like this:
type = s3
provider = AWS
env_auth = false
-access_key_id = 
-secret_access_key = 
+access_key_id =
+secret_access_key =
region = us-east-1
-endpoint = 
-location_constraint = 
+endpoint =
+location_constraint =
acl = private
-server_side_encryption = 
-storage_class = 
+server_side_encryption =
+storage_class =
```

Then use it as normal with the name of the public bucket, e.g.
@@ -1983,7 +1983,7 @@ upload_cutoff = 0

### Ceph

-[Ceph](https://ceph.com/) is an open source unified, distributed
+[Ceph](https://ceph.com/) is an open-source, unified, distributed
storage system designed for excellent performance, reliability and
scalability. It has an S3 compatible object storage interface.

@@ -2340,7 +2340,7 @@ location_constraint =
server_side_encryption =
```

-So once set up, for example to copy files into a bucket
+So once set up, for example, to copy files into a bucket

```
rclone copy /path/to/files minio:bucket
```
diff --git a/fs/accounting/token_bucket.go b/fs/accounting/token_bucket.go
index 60c3df4d8..5d7c87fba 100644
--- a/fs/accounting/token_bucket.go
+++ b/fs/accounting/token_bucket.go
@@ -281,7 +281,7 @@ If the rate parameter is not supplied then the bandwidth is queried

The format of the parameter is exactly the same as passed to --bwlimit
except only one bandwidth may be specified.

-In either case "rate" is returned as a human readable string, and
+In either case "rate" is returned as a human-readable string, and
"bytesPerSecond" is returned as a number.
`,
	})
diff --git a/fs/cache/cache_test.go b/fs/cache/cache_test.go
index 85d801ad9..c7620e25c 100644
--- a/fs/cache/cache_test.go
+++ b/fs/cache/cache_test.go
@@ -154,7 +154,7 @@ func TestPin(t *testing.T) {
	cleanup, create := mockNewFs(t)
	defer cleanup()

-	// Test pinning and unpinning non existent
+	// Test pinning and unpinning non-existent
	f := mockfs.NewFs(context.Background(), "mock", "/alien")
	Pin(f)
	Unpin(f)
diff --git a/fs/open_options.go b/fs/open_options.go
index dbf857567..e39a23fb7 100644
--- a/fs/open_options.go
+++ b/fs/open_options.go
@@ -99,7 +99,7 @@ func ParseRangeOption(s string) (po *RangeOption, err error) {
	return &o, nil
}

-// String formats the option into human readable form
+// String formats the option into human-readable form
func (o *RangeOption) String() string {
	return fmt.Sprintf("RangeOption(%d,%d)", o.Start, o.End)
}
@@ -178,7 +178,7 @@ func (o *SeekOption) Header() (key string, value string) {
	return key, value
}

-// String formats the option into human readable form
+// String formats the option into human-readable form
func (o *SeekOption) String() string {
	return fmt.Sprintf("SeekOption(%d)", o.Offset)
}
@@ -199,7 +199,7 @@ func (o *HTTPOption) Header() (key string, value string) {
	return o.Key, o.Value
}

-// String formats the option into human readable form
+// String formats the option into human-readable form
func (o *HTTPOption) String() string {
	return fmt.Sprintf("HTTPOption(%q,%q)", o.Key, o.Value)
}
@@ -220,7 +220,7 @@ func (o *HashesOption) Header() (key string, value string) {
	return "", ""
}

-// String formats the option into human readable form
+// String formats the option into human-readable form
func (o *HashesOption) String() string {
	return fmt.Sprintf("HashesOption(%v)", o.Hashes)
}
@@ -239,7 +239,7 @@ func (o NullOption) Header() (key string, value string) {
	return "", ""
}

-// String formats the option into human readable form
+// String formats the option into human-readable form
func (o NullOption) String() string {
	return fmt.Sprintf("NullOption()")
}
diff --git a/fs/operations/lsjson.go b/fs/operations/lsjson.go
index 3a311d1f3..c4ab976c9 100644
--- a/fs/operations/lsjson.go
+++ b/fs/operations/lsjson.go
@@ -131,7 +131,7 @@ func newListJSON(ctx context.Context, fsrc fs.Fs, remote string, opt *ListJSONOp
	features := fsrc.Features()
	lj.canGetTier = features.GetTier
	lj.format = formatForPrecision(fsrc.Precision())
-	lj.isBucket = features.BucketBased && remote == "" && fsrc.Root() == "" // if bucket based remote listing the root mark directories as buckets
+	lj.isBucket = features.BucketBased && remote == "" && fsrc.Root() == "" // if bucket-based remote listing the root mark directories as buckets
	lj.showHash = opt.ShowHash
	lj.hashTypes = fsrc.Hashes().Array()
	if len(opt.HashTypes) != 0 {
diff --git a/fs/operations/operations.go b/fs/operations/operations.go
index c6f9aa355..30a0306e7 100644
--- a/fs/operations/operations.go
+++ b/fs/operations/operations.go
@@ -943,7 +943,7 @@ func ListLong(ctx context.Context, f fs.Fs, w io.Writer) error {
	})
}

-// hashSum returns the human readable hash for ht passed in. This may
+// hashSum returns the human-readable hash for ht passed in. This may
// be UNSUPPORTED or ERROR. If it isn't returning a valid hash it will
// return an error.
func hashSum(ctx context.Context, ht hash.Type, downloadFlag bool, o fs.Object) (string, error) {
diff --git a/fs/parseduration.go b/fs/parseduration.go
index 7a572ed07..522dbae7a 100644
--- a/fs/parseduration.go
+++ b/fs/parseduration.go
@@ -119,7 +119,7 @@ func ParseDuration(age string) (time.Duration, error) {
	return parseDurationFromNow(age, time.Now)
}

-// ReadableString parses d into a human readable duration.
+// ReadableString parses d into a human-readable duration.
// Based on https://github.com/hako/durafmt func (d Duration) ReadableString() string { switch d { diff --git a/fs/walk/walk.go b/fs/walk/walk.go index 3afaac195..f3d76f298 100644 --- a/fs/walk/walk.go +++ b/fs/walk/walk.go @@ -124,7 +124,7 @@ func (l ListType) Filter(in *fs.DirEntries) { // If maxLevel is < 0 then it will recurse indefinitely, else it will // only do maxLevel levels. // -// If synthesizeDirs is set then for bucket based remotes it will +// If synthesizeDirs is set then for bucket-based remotes it will // synthesize directories from the file structure. This uses extra // memory so don't set this if you don't need directories, likewise do // set this if you are interested in directories. @@ -182,7 +182,7 @@ func listRwalk(ctx context.Context, f fs.Fs, path string, includeAll bool, maxLe return walkErr } -// dirMap keeps track of directories made for bucket based remotes. +// dirMap keeps track of directories made for bucket-based remotes. // true => directory has been sent // false => directory has been seen but not sent type dirMap struct { diff --git a/fs/walk/walk_test.go b/fs/walk/walk_test.go index b670ef2cb..c8da2e7a3 100644 --- a/fs/walk/walk_test.go +++ b/fs/walk/walk_test.go @@ -783,7 +783,7 @@ func TestListR(t *testing.T) { require.NoError(t, err) require.Equal(t, []string{"dir/b"}, got) - // Now bucket based + // Now bucket-based objects = fs.DirEntries{ mockobject.Object("a"), mockobject.Object("b"), diff --git a/fstest/fstests/fstests.go b/fstest/fstests/fstests.go index 34abfd815..2fb0ba4f4 100644 --- a/fstest/fstests/fstests.go +++ b/fstest/fstests/fstests.go @@ -491,14 +491,14 @@ func Run(t *testing.T, opt *Opt) { assert.True(t, len(fsInfo.CommandHelp) > 0, "Command is declared, must return some help in CommandHelp") }) - // TestFsRmdirNotFound tests deleting a non existent directory + // TestFsRmdirNotFound tests deleting a non-existent directory t.Run("FsRmdirNotFound", func(t *testing.T) { skipIfNotOk(t) if 
isBucketBasedButNotRoot(f) { - t.Skip("Skipping test as non root bucket based remote") + t.Skip("Skipping test as non-root bucket-based remote") } err := f.Rmdir(ctx, "") - assert.Error(t, err, "Expecting error on Rmdir non existent") + assert.Error(t, err, "Expecting error on Rmdir non-existent") }) // Make the directory @@ -1258,7 +1258,7 @@ func Run(t *testing.T, opt *Opt) { t.Run("FsRmdirFull", func(t *testing.T) { skipIfNotOk(t) if isBucketBasedButNotRoot(f) { - t.Skip("Skipping test as non root bucket based remote") + t.Skip("Skipping test as non-root bucket-based remote") } err := f.Rmdir(ctx, "") require.Error(t, err, "Expecting error on RMdir on non empty remote") @@ -1959,7 +1959,7 @@ func Run(t *testing.T, opt *Opt) { purged = true fstest.CheckListing(t, f, []fstest.Item{}) - // Check purging again if not bucket based + // Check purging again if not bucket-based if !isBucketBasedButNotRoot(f) { err = operations.Purge(ctx, f, "") assert.Error(t, err, "Expecting error after on second purge") diff --git a/fstest/test_all/config.go b/fstest/test_all/config.go index e5a012488..31a55a2a5 100644 --- a/fstest/test_all/config.go +++ b/fstest/test_all/config.go @@ -25,7 +25,7 @@ type Test struct { // Backend describes a backend test // -// FIXME make bucket based remotes set sub-dir automatically??? +// FIXME make bucket-based remotes set sub-dir automatically??? 
type Backend struct { Backend string // name of the backend directory Remote string // name of the test remote diff --git a/fstest/test_all/report.go b/fstest/test_all/report.go index bcc1b5658..cbc48f5cb 100644 --- a/fstest/test_all/report.go +++ b/fstest/test_all/report.go @@ -123,7 +123,7 @@ func (r *Report) RecordResult(t *Run) { } } -// Title returns a human readable summary title for the Report +// Title returns a human-readable summary title for the Report func (r *Report) Title() string { if r.AllPassed() { return fmt.Sprintf("PASS: All tests finished OK in %v", r.Duration) diff --git a/lib/bucket/bucket.go b/lib/bucket/bucket.go index a9c63b83d..f77887129 100644 --- a/lib/bucket/bucket.go +++ b/lib/bucket/bucket.go @@ -1,4 +1,4 @@ -// Package bucket is contains utilities for managing bucket based backends +// Package bucket contains utilities for managing bucket-based backends package bucket import ( diff --git a/lib/cache/cache_test.go b/lib/cache/cache_test.go index 7a2366963..089db07ef 100644 --- a/lib/cache/cache_test.go +++ b/lib/cache/cache_test.go @@ -158,7 +158,7 @@ func TestCachePin(t *testing.T) { _, err := c.Get("/", create) require.NoError(t, err) - // Pin a non existent item to show nothing happens + // Pin a non-existent item to show nothing happens c.Pin("notfound") c.mu.Lock() @@ -312,7 +312,7 @@ func TestCacheRename(t *testing.T) { assert.Equal(t, 2, c.Entries()) - // rename to non existent + // rename to non-existent value, found := c.Rename("existing1", "EXISTING1") assert.Equal(t, true, found) assert.Equal(t, existing1, value) @@ -326,7 +326,7 @@ 
import ( // and b will be set. // // This is useful for copying between almost identical structures that -// are frequently present in auto generated code for cloud storage +// are frequently present in auto-generated code for cloud storage // interfaces. func SetFrom(a, b interface{}) { ta := reflect.TypeOf(a).Elem() diff --git a/vfs/dir_test.go b/vfs/dir_test.go index 93c35c00d..95e7bfd45 100644 --- a/vfs/dir_test.go +++ b/vfs/dir_test.go @@ -585,7 +585,7 @@ func TestDirRename(t *testing.T) { "renamed empty directory,0,true", }) // ...we don't check the underlying f.Fremote because on - // bucket based remotes the directory won't be there + // bucket-based remotes the directory won't be there // read only check vfs.Opt.ReadOnly = true diff --git a/vfs/vfscache/cache_test.go b/vfs/vfscache/cache_test.go index 761e22a98..8df997d14 100644 --- a/vfs/vfscache/cache_test.go +++ b/vfs/vfscache/cache_test.go @@ -603,7 +603,7 @@ func TestCacheRename(t *testing.T) { assertPathNotExist(t, osPathMeta) assert.False(t, c.Exists("sub/newPotato")) - // non existent file - is ignored + // non-existent file - is ignored assert.NoError(t, c.Rename("nonexist", "nonexist2", nil)) }