docs: improve grammar and fix typos (#5361)

This alters some comments in source files, but is mainly concerned with documentation files and help messages.
Atílio Antônio 2021-11-04 08:50:43 -03:00 committed by GitHub
parent 454574e2cc
commit c08d48a50d
59 changed files with 179 additions and 179 deletions


@@ -9,7 +9,7 @@ We understand you are having a problem with rclone; we want to help you with tha
 **STOP and READ**
 **YOUR POST WILL BE REMOVED IF IT IS LOW QUALITY**:
-Please show the effort you've put in to solving the problem and please be specific.
+Please show the effort you've put into solving the problem and please be specific.
 People are volunteering their time to help! Low effort posts are not likely to get good answers!
 If you think you might have found a bug, try to replicate it with the latest beta (or stable).


@@ -223,7 +223,7 @@ find the results at https://pub.rclone.org/integration-tests/
 Rclone code is organised into a small number of top level directories
 with modules beneath.
 * backend - the rclone backends for interfacing to cloud providers -
   * all - import this to load all the cloud providers
   * ...providers
 * bin - scripts for use while building or maintaining rclone
@@ -233,7 +233,7 @@ with modules beneath.
 * cmdtest - end-to-end tests of commands, flags, environment variables,...
 * docs - the documentation and website
   * content - adjust these docs only - everything else is autogenerated
-  * command - these are auto generated - edit the corresponding .go file
+  * command - these are auto-generated - edit the corresponding .go file
 * fs - main rclone definitions - minimal amount of code
   * accounting - bandwidth limiting and statistics
   * asyncreader - an io.Reader which reads ahead
@@ -299,7 +299,7 @@ the source file in the `Help:` field.
 countries, it looks better without an ending period/full stop character.
 The only documentation you need to edit are the `docs/content/*.md`
-files. The `MANUAL.*`, `rclone.1`, web site, etc. are all auto generated
+files. The `MANUAL.*`, `rclone.1`, website, etc. are all auto-generated
 from those during the release process. See the `make doc` and `make
 website` targets in the Makefile if you are interested in how. You
 don't need to run these when adding a feature.
@@ -350,7 +350,7 @@ And here is an example of a longer one:
 ```
 mount: fix hang on errored upload
-In certain circumstances if an upload failed then the mount could hang
+In certain circumstances, if an upload failed then the mount could hang
 indefinitely. This was fixed by closing the read pipe after the Put
 completed. This will cause the write side to return a pipe closed
 error fixing the hang.
@@ -425,8 +425,8 @@ Research
 Getting going
 * Create `backend/remote/remote.go` (copy this from a similar remote)
-  * box is a good one to start from if you have a directory based remote
-  * b2 is a good one to start from if you have a bucket based remote
+  * box is a good one to start from if you have a directory-based remote
+  * b2 is a good one to start from if you have a bucket-based remote
 * Add your remote to the imports in `backend/all/all.go`
 * HTTP based remotes are easiest to maintain if they use rclone's rest module, but if there is a really good go SDK then use that instead.
 * Try to implement as many optional methods as possible as it makes the remote more usable.


@@ -19,7 +19,7 @@ Current active maintainers of rclone are:
 **This is a work in progress Draft**
-This is a guide for how to be an rclone maintainer. This is mostly a writeup of what I (@ncw) attempt to do.
+This is a guide for how to be an rclone maintainer. This is mostly a write-up of what I (@ncw) attempt to do.
 ## Triaging Tickets ##
@@ -27,15 +27,15 @@ When a ticket comes in it should be triaged. This means it should be classified
 Rclone uses the labels like this:
-* `bug` - a definite verified bug
+* `bug` - a definitely verified bug
 * `can't reproduce` - a problem which we can't reproduce
 * `doc fix` - a bug in the documentation - if users need help understanding the docs add this label
 * `duplicate` - normally close these and ask the user to subscribe to the original
 * `enhancement: new remote` - a new rclone backend
 * `enhancement` - a new feature
 * `FUSE` - to do with `rclone mount` command
-* `good first issue` - mark these if you find a small self contained issue - these get shown to new visitors to the project
-* `help` wanted - mark these if you find a self contained issue - these get shown to new visitors to the project
+* `good first issue` - mark these if you find a small self-contained issue - these get shown to new visitors to the project
+* `help` wanted - mark these if you find a self-contained issue - these get shown to new visitors to the project
 * `IMPORTANT` - note to maintainers not to forget to fix this for the release
 * `maintenance` - internal enhancement, code re-organisation, etc.
 * `Needs Go 1.XX` - waiting for that version of Go to be released
@@ -51,7 +51,7 @@ The milestones have these meanings:
 * v1.XX - stuff we would like to fit into this release
 * v1.XX+1 - stuff we are leaving until the next release
-* Soon - stuff we think is a good idea - waiting to be scheduled to a release
+* Soon - stuff we think is a good idea - waiting to be scheduled for a release
 * Help wanted - blue sky stuff that might get moved up, or someone could help with
 * Known bugs - bugs waiting on external factors or we aren't going to fix for the moment
@@ -65,7 +65,7 @@ Close tickets as soon as you can - make sure they are tagged with a release. Po
 Try to process pull requests promptly!
-Merging pull requests on GitHub itself works quite well now-a-days so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
+Merging pull requests on GitHub itself works quite well nowadays so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
 After merging the commit, in your local master branch, do `git pull` then run `bin/update-authors.py` to update the authors file then `git push`.
@@ -81,15 +81,15 @@ Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer
 High impact regressions should be fixed before the next release.
-Near the start of the release cycle the dependencies should be updated with `make update` to give time for bugs to surface.
+Near the start of the release cycle, the dependencies should be updated with `make update` to give time for bugs to surface.
 Towards the end of the release cycle try not to merge anything too big so let things settle down.
-Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time consuming often needing several rounds of test and fix depending on exactly how many new features rclone has gained.
+Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time-consuming often needing several rounds of test and fix depending on exactly how many new features rclone has gained.
 ## Mailing list ##
-There is now an invite only mailing list for rclone developers `rclone-dev` on google groups.
+There is now an invite-only mailing list for rclone developers `rclone-dev` on google groups.
 ## TODO ##


@@ -2,7 +2,7 @@
 [Website](https://rclone.org) |
 [Documentation](https://rclone.org/docs/) |
 [Download](https://rclone.org/downloads/) |
 [Contributing](CONTRIBUTING.md) |
 [Changelog](https://rclone.org/changelog/) |
 [Installation](https://rclone.org/install/) |
@@ -10,12 +10,12 @@
 [![Build Status](https://github.com/rclone/rclone/workflows/build/badge.svg)](https://github.com/rclone/rclone/actions?query=workflow%3Abuild)
 [![Go Report Card](https://goreportcard.com/badge/github.com/rclone/rclone)](https://goreportcard.com/report/github.com/rclone/rclone)
 [![GoDoc](https://godoc.org/github.com/rclone/rclone?status.svg)](https://godoc.org/github.com/rclone/rclone)
 [![Docker Pulls](https://img.shields.io/docker/pulls/rclone/rclone)](https://hub.docker.com/r/rclone/rclone)
 # Rclone
-Rclone *("rsync for cloud storage")* is a command line program to sync files and directories to and from different cloud storage providers.
+Rclone *("rsync for cloud storage")* is a command-line program to sync files and directories to and from different cloud storage providers.
 ## Storage providers
@@ -72,7 +72,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
 * Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
 * Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/)
 * The local filesystem [:page_facing_up:](https://rclone.org/local/)
 Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
 ## Features


@@ -99,7 +99,7 @@ func NewNameEncryptionMode(s string) (mode NameEncryptionMode, err error) {
 	return mode, err
 }
-// String turns mode into a human readable string
+// String turns mode into a human-readable string
 func (mode NameEncryptionMode) String() (out string) {
 	switch mode {
 	case NameEncryptionOff:
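The `String` method in the hunk above follows Go's `fmt.Stringer` pattern: an integer mode type that prints as a readable name. A minimal sketch of the same pattern, with illustrative constant names and return strings (not necessarily the exact values crypt uses):

```go
package main

import "fmt"

// NameEncryptionMode mirrors the pattern in the diff: an integer mode
// with a String method so it formats as a human-readable name.
type NameEncryptionMode int

const (
	NameEncryptionOff NameEncryptionMode = iota
	NameEncryptionStandard
	NameEncryptionObfuscated
)

// String turns mode into a human-readable string
func (mode NameEncryptionMode) String() string {
	switch mode {
	case NameEncryptionOff:
		return "off"
	case NameEncryptionStandard:
		return "standard"
	case NameEncryptionObfuscated:
		return "obfuscated"
	}
	return fmt.Sprintf("Unknown mode #%d", int(mode))
}

func main() {
	// fmt calls String automatically for types implementing fmt.Stringer.
	fmt.Println(NameEncryptionOff)
}
```

Because `fmt` detects the `Stringer` interface, the mode prints as its name anywhere it is logged or interpolated.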


@@ -139,7 +139,7 @@ you want to read the media.`,
 	Default: false,
 	Help: `Also view and download archived media.
-By default rclone does not request archived media. Thus, when syncing,
+By default, rclone does not request archived media. Thus, when syncing,
 archived media is not visible in directory listings or transferred.
 Note that media in albums is always visible and synced, no matter


@@ -49,7 +49,7 @@ Use this to set additional HTTP headers for all transactions.
 The input format is comma separated list of key,value pairs. Standard
 [CSV encoding](https://godoc.org/encoding/csv) may be used.
-For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
+For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
 You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
 `,
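The help text above describes the flag value as a comma-separated list of key,value pairs with optional CSV quoting. A sketch of how such a value can be decoded with Go's standard `encoding/csv` package; `parseHeaders` is a hypothetical helper for illustration, not rclone's actual parser:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// parseHeaders splits a value like `"Cookie","name=value","Authorization","xxx"`
// into header name/value pairs using standard CSV decoding, as the help
// text describes. Hypothetical helper, not rclone's code.
func parseHeaders(s string) (map[string]string, error) {
	fields, err := csv.NewReader(strings.NewReader(s)).Read()
	if err != nil {
		return nil, err
	}
	if len(fields)%2 != 0 {
		return nil, fmt.Errorf("headers: odd number of fields in %q", s)
	}
	headers := make(map[string]string)
	for i := 0; i < len(fields); i += 2 {
		headers[fields[i]] = fields[i+1]
	}
	return headers, nil
}

func main() {
	h, err := parseHeaders(`"Cookie","name=value","Authorization","xxx"`)
	if err != nil {
		panic(err)
	}
	fmt.Println(h["Cookie"], h["Authorization"])
}
```

CSV quoting is what lets a header value itself contain a comma, e.g. `"Cookie","a=1, b=2"`.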


@@ -269,7 +269,7 @@ func errorHandler(res *http.Response) (err error) {
 	}
 	serverError.Message = string(data)
 	if serverError.Message == "" || strings.HasPrefix(serverError.Message, "{") {
-		// Replace empty or JSON response with a human readable text.
+		// Replace empty or JSON response with a human-readable text.
 		serverError.Message = res.Status
 	}
 	serverError.Status = res.StatusCode


@@ -261,7 +261,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 // splitNodePath splits nodePath into / separated parts, returning nil if it
 // should refer to the root.
-// It also encodes the parts into backend specific encoding
+// It also encodes the parts into backend-specific encoding
 func (f *Fs) splitNodePath(nodePath string) (parts []string) {
 	nodePath = path.Clean(nodePath)
 	if nodePath == "." || nodePath == "/" {
@@ -354,7 +354,7 @@ func (f *Fs) mkdir(ctx context.Context, rootNode *mega.Node, dir string) (node *
 		}
 	}
 	if err != nil {
-		return nil, errors.Wrap(err, "internal error: mkdir called with non existent root node")
+		return nil, errors.Wrap(err, "internal error: mkdir called with non-existent root node")
 	}
 	// i is number of directories to create (may be 0)
 	// node is directory to create them from
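The `splitNodePath` helper shown in the first hunk cleans the path and treats `.` and `/` as the root. A standalone sketch of that split logic, omitting the backend-specific encoding step mentioned in the comment:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// splitNodePath sketches the logic from the hunk above: clean the path,
// return nil for the root ("." or "/"), otherwise split on "/".
// The backend-specific encoding step of the real method is omitted.
func splitNodePath(nodePath string) []string {
	nodePath = path.Clean(nodePath)
	if nodePath == "." || nodePath == "/" {
		return nil // refers to the root
	}
	return strings.Split(strings.Trim(nodePath, "/"), "/")
}

func main() {
	// path.Clean collapses "//" and trailing slashes before the split.
	fmt.Println(splitNodePath("/a/b//c/"))
	fmt.Println(splitNodePath("/") == nil)
}
```

Returning `nil` for the root lets callers distinguish "no path components" from a path with empty components, which `strings.Split` alone would not do.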


@@ -141,7 +141,7 @@ Note that the chunks will be buffered into memory.`,
 	Name: "expose_onenote_files",
 	Help: `Set to make OneNote files show up in directory listings.
-By default rclone will hide OneNote files in directory listings because
+By default, rclone will hide OneNote files in directory listings because
 operations like "Open" and "Update" won't work on them. But this
 behaviour may also prevent you from deleting them. If you want to
 delete OneNote files or otherwise want them to show up in directory


@@ -118,7 +118,7 @@ Use this to set additional HTTP headers for all transactions
 The input format is comma separated list of key,value pairs. Standard
 [CSV encoding](https://godoc.org/encoding/csv) may be used.
-For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
+For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
 You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
 `,


@@ -9,7 +9,7 @@ provides:
 maintainer: "Nick Craig-Wood <nick@craig-wood.com>"
 description: |
   Rclone - "rsync for cloud storage"
-  is a command line program to sync files and directories to and
+  is a command-line program to sync files and directories to and
   from most cloud providers. It can also mount, tree, ncdu and lots
   of other useful things.
 vendor: "rclone"


@@ -76,7 +76,7 @@ Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g.
     Trashed: 104857602
     Other:   8849156022
-A ` + "`--json`" + ` flag generates conveniently computer readable output, e.g.
+A ` + "`--json`" + ` flag generates conveniently machine-readable output, e.g.
     {
       "total": 18253611008,


@@ -30,9 +30,9 @@ func init() {
 var commandDefinition = &cobra.Command{
 	Use:   "backend <command> remote:path [opts] <args>",
-	Short: `Run a backend specific command.`,
+	Short: `Run a backend-specific command.`,
 	Long: `
-This runs a backend specific command. The commands themselves (except
+This runs a backend-specific command. The commands themselves (except
 for "help" and "features") are defined by the backends and you should
 see the backend docs for definitions.


@@ -136,7 +136,7 @@ var commandDefinition = &cobra.Command{
 	Short: `Checks the files in the source and destination match.`,
 	Long: strings.ReplaceAll(`
 Checks the files in the source and destination match. It compares
-sizes and hashes (MD5 or SHA1) and logs a report of files which don't
+sizes and hashes (MD5 or SHA1) and logs a report of files that don't
 match. It doesn't alter the source or destination.
 If you supply the |--size-only| flag, it will only compare the sizes not


@@ -214,7 +214,7 @@ var configCreateCommand = &cobra.Command{
 Create a new remote of |name| with |type| and options. The options
 should be passed in pairs of |key| |value| or as |key=value|.
-For example to make a swift remote of name myremote using auto config
+For example, to make a swift remote of name myremote using auto config
 you would do:
     rclone config create myremote swift env_auth true
@@ -277,7 +277,7 @@ var configUpdateCommand = &cobra.Command{
 Update an existing remote's options. The options should be passed in
 pairs of |key| |value| or as |key=value|.
-For example to update the env_auth field of a remote of name myremote
+For example, to update the env_auth field of a remote of name myremote
 you would do:
     rclone config update myremote env_auth true
@@ -317,7 +317,7 @@ Update an existing remote's password. The password
 should be passed in pairs of |key| |password| or as |key=password|.
 The |password| should be passed in in clear (unobscured).
-For example to set password of a remote of name myremote you would do:
+For example, to set password of a remote of name myremote you would do:
     rclone config password myremote fieldname mypassword
     rclone config password myremote fieldname=mypassword
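The help texts above all accept options either as `key value` pairs or as `key=value`. A sketch of how such an argument list can be folded into one options map; `pairsToMap` is a hypothetical helper for illustration, not rclone's actual parser:

```go
package main

import (
	"fmt"
	"strings"
)

// pairsToMap accepts arguments given either as separate "key value"
// pairs or as single "key=value" tokens, mixing both forms freely,
// and returns one options map. Hypothetical helper, not rclone's code.
func pairsToMap(args []string) (map[string]string, error) {
	opts := make(map[string]string)
	for i := 0; i < len(args); i++ {
		// "key=value" form: one token carries both halves.
		if k, v, ok := strings.Cut(args[i], "="); ok {
			opts[k] = v
			continue
		}
		// "key value" form: the next token is the value.
		if i+1 >= len(args) {
			return nil, fmt.Errorf("key %q has no value", args[i])
		}
		opts[args[i]] = args[i+1]
		i++
	}
	return opts, nil
}

func main() {
	opts, err := pairsToMap([]string{"env_auth", "true", "region=us-east-1"})
	if err != nil {
		panic(err)
	}
	fmt.Println(opts["env_auth"], opts["region"])
}
```

Supporting both forms keeps old `key value` invocations working while letting `key=value` avoid shell-quoting ambiguity.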


@@ -20,7 +20,7 @@ func init() {
 	cmd.Root.AddCommand(commandDefinition)
 	cmdFlag := commandDefinition.Flags()
 	flags.FVarP(cmdFlag, &dedupeMode, "dedupe-mode", "", "Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename")
-	flags.BoolVarP(cmdFlag, &byHash, "by-hash", "", false, "Find indentical hashes rather than names")
+	flags.BoolVarP(cmdFlag, &byHash, "by-hash", "", false, "Find identical hashes rather than names")
 }
 var commandDefinition = &cobra.Command{
@@ -47,7 +47,7 @@ name. It will do this iteratively until all the identically named
 directories have been merged.
 Next, if deduping by name, for every group of duplicate file names /
-hashes, it will delete all but one identical files it finds without
+hashes, it will delete all but one identical file it finds without
 confirmation. This means that for most duplicated files the ` +
 "`dedupe`" + ` command will not be interactive.
@@ -59,7 +59,7 @@ identical if they have the same size (any hash will be ignored). This
 can be useful on crypt backends which do not support hashes.
 Next rclone will resolve the remaining duplicates. Exactly which
-action is taken depends on the dedupe mode. By default rclone will
+action is taken depends on the dedupe mode. By default, rclone will
 interactively query the user for each one.
 **Important**: Since this can cause data loss, test first with the
@@ -126,7 +126,7 @@ Dedupe can be run non interactively using the ` + "`" + `--dedupe-mode` + "`" +
 * ` + "`" + `--dedupe-mode rename` + "`" + ` - removes identical files then renames the rest to be different.
 * ` + "`" + `--dedupe-mode list` + "`" + ` - lists duplicate dirs and files only and changes nothing.
-For example to rename all the identically named photos in your Google Photos directory, do
+For example, to rename all the identically named photos in your Google Photos directory, do
     rclone dedupe --dedupe-mode rename "drive:Google Photos"
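The help text above describes dedupe grouping files by name or, with `--by-hash`, by identical hash, then resolving each group of duplicates. A sketch of that first grouping step, as an illustration of the idea rather than rclone's implementation (the `file` type and `groupByHash` helper are hypothetical):

```go
package main

import "fmt"

// file is a minimal stand-in for a remote object: a name plus its hash.
type file struct {
	Name string
	Hash string
}

// groupByHash buckets files by identical hash; any bucket with more
// than one entry is a duplicate set for dedupe to resolve.
func groupByHash(files []file) map[string][]file {
	groups := make(map[string][]file)
	for _, f := range files {
		groups[f.Hash] = append(groups[f.Hash], f)
	}
	return groups
}

func main() {
	groups := groupByHash([]file{
		{"a.txt", "abc"}, {"b.txt", "abc"}, {"c.txt", "def"},
	})
	for hash, g := range groups {
		if len(g) > 1 {
			fmt.Println("duplicates for", hash, "-", len(g), "files")
		}
	}
}
```

The chosen `--dedupe-mode` then decides which member of each multi-entry group survives (first, newest, largest, and so on).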


@@ -17,15 +17,15 @@ There are several related list commands
 * |lsf| to list objects and directories in easy to parse format
 * |lsjson| to list objects and directories in JSON format
-|ls|,|lsl|,|lsd| are designed to be human readable.
-|lsf| is designed to be human and machine readable.
-|lsjson| is designed to be machine readable.
+|ls|,|lsl|,|lsd| are designed to be human-readable.
+|lsf| is designed to be human and machine-readable.
+|lsjson| is designed to be machine-readable.
 Note that |ls| and |lsl| recurse by default - use |--max-depth 1| to stop the recursion.
 The other list commands |lsd|,|lsf|,|lsjson| do not recurse by default - use |-R| to make them recurse.
-Listing a non existent directory will produce an error except for
+Listing a non-existent directory will produce an error except for
 remotes which can't have empty directories (e.g. s3, swift, or gcs -
-the bucket based remotes).
+the bucket-based remotes).
 `, "|", "`")


@@ -93,13 +93,13 @@ can be returned as an empty string if it isn't available on the object
 the object and "UNSUPPORTED" if that object does not support that hash
 type.
-For example to emulate the md5sum command you can use
+For example, to emulate the md5sum command you can use
     rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
 Eg
     $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket
     7908e352297f0f530b84a756f188baa3 bevajer5jef
     cd65ac234e6fea5925974a51cdd865cc canole
     03b5341b4f234b9d984d03ad076bae91 diwogej7
@@ -134,7 +134,7 @@ Eg
 Note that the --absolute parameter is useful for making lists of files
 to pass to an rclone copy with the --files-from-raw flag.
-For example to find all the files modified within one day and copy
+For example, to find all the files modified within one day and copy
 those only (without traversing the whole directory structure):
     rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files


@@ -93,7 +93,7 @@ If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
 will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".
 When used without --recursive the Path will always be the same as Name.
-If the directory is a bucket in a bucket based backend, then
+If the directory is a bucket in a bucket-based backend, then
 "IsBucket" will be set to true. This key won't be present unless it is
 "true".


@@ -65,7 +65,7 @@ at all, then 1 PiB is set as both the total and the free size.
 To run rclone @ on Windows, you will need to
 download and install [WinFsp](http://www.secfs.net/winfsp/).
-[WinFsp](https://github.com/billziss-gh/winfsp) is an open source
+[WinFsp](https://github.com/billziss-gh/winfsp) is an open-source
 Windows File System Proxy which makes it easy to write user space file
 systems for Windows. It provides a FUSE emulation layer which rclone
 uses combination with [cgofuse](https://github.com/billziss-gh/cgofuse).
@@ -235,7 +235,7 @@ applications won't work with their files on an rclone mount without
 |--vfs-cache-mode writes| or |--vfs-cache-mode full|.
 See the [VFS File Caching](#vfs-file-caching) section for more info.
-The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2,
+The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2,
 Hubic) do not support the concept of empty directories, so empty
 directories will have a tendency to disappear once they fall out of
 the directory cache.


@@ -18,7 +18,7 @@ func init() {
 var commandDefinition = &cobra.Command{
 	Use:   "obscure password",
 	Short: `Obscure password for use in the rclone config file.`,
-	Long: `In the rclone config file, human readable passwords are
+	Long: `In the rclone config file, human-readable passwords are
 obscured. Obscuring them is done by encrypting them and writing them
 out in base64. This is **not** a secure way of encrypting these
 passwords as rclone can decrypt them - it is to prevent "eyedropping"


@@ -51,7 +51,7 @@ var Command = &cobra.Command{
 over HTTP. This allows restic to use rclone as a data storage
 mechanism for cloud providers that restic does not support directly.
-[Restic](https://restic.net/) is a command line program for doing
+[Restic](https://restic.net/) is a command-line program for doing
 backups.
 The server will log errors. Use -v to see access logs.


@@ -82,7 +82,7 @@ For example
     subdir
     file4
     file5
 1 directories, 5 files
 You can use any of the filtering options with the tree command (e.g.

View File

@@ -5,7 +5,7 @@ rclone.
See the `content` directory for the docs in markdown format.

Note that some of the docs are auto-generated - these should have a DO
NOT EDIT marker near the top.

Use [hugo](https://github.com/spf13/hugo) to build the website.
@@ -28,7 +28,7 @@ so it is easy to tweak stuff.
├── config.json - hugo config file
├── content - docs and backend docs
│   ├── _index.md - the front page of rclone.org
│   ├── commands - auto-generated command docs - DO NOT EDIT
├── i18n
│   └── en.toml - hugo multilingual config
├── layouts - how the markdown gets converted into HTML

View File

@@ -19,8 +19,8 @@ notoc: true
## About rclone {#about}

Rclone is a command-line program to manage files on cloud storage. It
is a feature-rich alternative to cloud vendors' web storage
interfaces. [Over 40 cloud storage products](#providers) support
rclone including S3 object stores, business & consumer file storage
services, as well as standard transfer protocols.
@@ -43,7 +43,7 @@ bandwidth use and transfers from one provider to another without
using local disk.

Virtual backends wrap local and cloud file systems to apply
[encryption](/crypt/),
[compression](/compress/),
[chunking](/chunker/),
[hashing](/hasher/) and
@@ -58,13 +58,13 @@ macOS, linux and FreeBSD, and also serves these over
[FTP](/commands/rclone_serve_ftp/) and
[DLNA](/commands/rclone_serve_dlna/).

Rclone is mature, open-source software originally inspired by rsync
and written in [Go](https://golang.org). The friendly support
community is familiar with varied use cases. Official Ubuntu, Debian,
Fedora, Brew and Chocolatey repos include rclone. For the latest
version [downloading from rclone.org](/downloads/) is recommended.

Rclone is widely used on Linux, Windows and Mac. Third-party
developers create innovative backup, restore, GUI and business
process solutions using the rclone command line or API.
@@ -77,7 +77,7 @@ Rclone helps you:
- Backup (and encrypt) files to cloud storage
- Restore (and decrypt) files from cloud storage
- Mirror cloud data to other cloud services or locally
- Migrate data to the cloud, or between cloud storage vendors
- Mount multiple, encrypted, cached or diverse cloud storage as a disk
- Analyse and account for data held on cloud storage using [lsf](/commands/rclone_lsf/), [lsjson](/commands/rclone_lsjson/), [size](/commands/rclone_size/), [ncdu](/commands/rclone_ncdu/)
- [Union](/union/) file systems together to present multiple local and/or cloud file systems as one

View File

@@ -36,7 +36,7 @@ which pass through it.
Rclone doesn't currently have its own Amazon Drive credentials,
so you will either need to have your own `client_id` and
`client_secret` with Amazon Drive, or use a third-party oauth proxy
in which case you will need to enter `client_id`, `client_secret`,
`auth_url` and `token_url`.
@@ -148,7 +148,7 @@ as they can't be used in JSON strings.
Any files you delete with rclone will end up in the trash. Amazon
don't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Amazon's apps or via
the Amazon Drive website. As of November 17, 2016, files are
automatically deleted by Amazon from the trash after 30 days.

### Using with non `.com` Amazon accounts

View File

@@ -22,11 +22,11 @@ Millions of files in a directory tends to occur on bucket-based remotes
(e.g. S3 buckets) since those remotes do not segregate subdirectories within
the bucket.

### Bucket-based remotes and folders

Bucket-based remotes (e.g. S3/GCS/Swift/B2) do not have a concept of
directories. Rclone therefore cannot create directories in them, which
means that empty directories on a bucket-based remote will tend to
disappear.

Some software creates empty keys ending in `/` as directory markers.

View File

@@ -376,7 +376,7 @@ description: "Rclone Changelog"
* New Features
* [Connection strings](/docs/#connection-strings)
* Config parameters can now be passed as part of the remote name as a connection string.
* For example, to do the equivalent of `--drive-shared-with-me` use `drive,shared_with_me:`
* Make sure we don't save on the fly remote config to the config file (Nick Craig-Wood)
* Make sure backends with additional config have a different name for caching (Nick Craig-Wood)
* This work was sponsored by CERN, through the [CS3MESH4EOSC Project](https://cs3mesh4eosc.eu/).
@@ -629,7 +629,7 @@ description: "Rclone Changelog"
* And thanks to these people for many doc fixes too numerous to list
* Ameer Dawood, Antoine GIRARD, Bob Bagwill, Christopher Stewart
* CokeMine, David, Dov Murik, Durval Menezes, Evan Harris, gtorelly
* Ilyess Bachiri, Janne Johansson, Kerry Su, Marcin Zelent,
* Martin Michlmayr, Milly, Sơn Trần-Nguyễn
* Mount
* Update systemd status with cache stats (Hekmon)
@@ -1174,7 +1174,7 @@ all the docs and Edward Barker for helping re-write the front page.
* [Union](/union/) re-write to have multiple writable remotes (Max Sum)
* [Seafile](/seafile) for Seafile server (Fred @creativeprojects)
* New commands
* backend: command for backend-specific commands (see backends) (Nick Craig-Wood)
* cachestats: Deprecate in favour of `rclone backend stats cache:` (Nick Craig-Wood)
* dbhashsum: Deprecate in favour of `rclone hashsum DropboxHash` (Nick Craig-Wood)
* New Features
@@ -1211,7 +1211,7 @@ all the docs and Edward Barker for helping re-write the front page.
* lsjson: Add `--hash-type` parameter and use it in lsf to speed up hashing (Nick Craig-Wood)
* rc
* Add `-o`/`--opt` and `-a`/`--arg` for more structured input (Nick Craig-Wood)
* Implement `backend/command` for running backend-specific commands remotely (Nick Craig-Wood)
* Add `mount/mount` command for starting `rclone mount` via the API (Chaitanya)
* rcd: Add Prometheus metrics support (Gary Kim)
* serve http
@@ -1638,7 +1638,7 @@ all the docs and Edward Barker for helping re-write the front page.
* Add flag `--vfs-case-insensitive` for windows/macOS mounts (Ivan Andreev)
* Make objects of unknown size readable through the VFS (Nick Craig-Wood)
* Move writeback of dirty data out of close() method into its own method (FlushWrites) and remove close() call from Flush() (Brett Dutro)
* Stop empty dirs disappearing when renamed on bucket-based remotes (Nick Craig-Wood)
* Stop change notify polling clearing so much of the directory cache (Nick Craig-Wood)
* Azure Blob
* Disable logging to the Windows event log (Nick Craig-Wood)
@@ -1791,7 +1791,7 @@ all the docs and Edward Barker for helping re-write the front page.
* rcd: Fix permissions problems on cache directory with web gui download (Nick Craig-Wood)
* Mount
* Default `--daemon-timout` to 15 minutes on macOS and FreeBSD (Nick Craig-Wood)
* Update docs to show mounting from root OK for bucket-based (Nick Craig-Wood)
* Remove nonseekable flag from write files (Nick Craig-Wood)
* VFS
* Make write without cache more efficient (Nick Craig-Wood)
@@ -1858,7 +1858,7 @@ all the docs and Edward Barker for helping re-write the front page.
* controlled with `--multi-thread-cutoff` and `--multi-thread-streams`
* Use rclone.conf from rclone executable directory to enable portable use (albertony)
* Allow sync of a file and a directory with the same name (forgems)
* this is common on bucket-based remotes, e.g. s3, gcs
* Add `--ignore-case-sync` for forced case insensitivity (garry415)
* Implement `--stats-one-line-date` and `--stats-one-line-date-format` (Peter Berbec)
* Log an ERROR for all commands which exit with non-zero status (Nick Craig-Wood)
@@ -1872,7 +1872,7 @@ all the docs and Edward Barker for helping re-write the front page.
* lsjson
* Added EncryptedPath to output (calisro)
* Support showing the Tier of the object (Nick Craig-Wood)
* Add IsBucket field for bucket-based remote listing of the root (Nick Craig-Wood)
* rc
* Add `--loopback` flag to run commands directly without a server (Nick Craig-Wood)
* Add operations/fsinfo: Return information about the remote (Nick Craig-Wood)
@@ -1888,7 +1888,7 @@ all the docs and Edward Barker for helping re-write the front page.
* Make move and copy individual files obey `--backup-dir` (Nick Craig-Wood)
* If `--ignore-checksum` is in effect, don't calculate checksum (Nick Craig-Wood)
* moveto: Fix case-insensitive same remote move (Gary Kim)
* rc: Fix serving bucket-based objects with `--rc-serve` (Nick Craig-Wood)
* serve webdav: Fix serveDir not being updated with changes from webdav (Gary Kim)
* Mount
* Fix poll interval documentation (Animosity022)
@@ -2573,7 +2573,7 @@ Point release to fix hubic and azureblob backends.
* Always forget parent dir for notifications
* Integrate with Plex websocket
* Add rc cache/stats (seuffert)
* Add info log on notification
* Box
* Fix failure reading large directories - parse file/directory size as float
* Dropbox
@@ -2754,7 +2754,7 @@ Point release to fix hubic and azureblob backends.
* Fix following of symlinks
* Fix reading config file outside of Fs setup
* Fix reading $USER in username fallback not $HOME
* Fix running under crontab - Use correct OS way of reading username
* Swift
* Fix refresh of authentication token
* in v1.39 a bug was introduced which ignored new tokens - this fixes it
@@ -2917,7 +2917,7 @@ Point release to fix hubic and azureblob backends.
* HTTP - thanks to Vasiliy Tolstov
* New commands
* rclone ncdu - for exploring a remote with a text based user interface.
* rclone lsjson - for listing with a machine-readable output
* rclone dbhashsum - to show Dropbox style hashes of files (local or Dropbox)
* New Features
* Implement --fast-list flag
@@ -3181,7 +3181,7 @@ Point release to fix hubic and azureblob backends.
* Unix: implement `-x`/`--one-file-system` to stay on a single file system
* thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana
* Windows: ignore the symlink bit on files
* Windows: Ignore directory-based junction points
* B2
* Make sure each upload has at least one upload slot - fixes strange upload stats
* Fix uploads when using crypt
@@ -3284,7 +3284,7 @@ Point release to fix hubic and azureblob backends.
* Retry more errors
* Add --ignore-size flag - for uploading images to onedrive
* Log -v output to stdout by default
* Display the transfer stats in more human-readable form
* Make 0 size files specifiable with `--max-size 0b`
* Add `b` suffix so we can specify bytes in --bwlimit, --min-size, etc.
* Use "password:" instead of "password>" prompt - thanks Klaus Post and Leigh Klotz

View File

@@ -18,7 +18,7 @@ a remote.
First check your chosen remote is working - we'll call it `remote:path` here.
Note that anything inside `remote:path` will be chunked and anything outside
won't. This means that if you are using a bucket-based remote (e.g. S3, B2, swift)
then you should probably put the bucket in the remote `s3:bucket`.

Now configure `chunker` using `rclone config`. We will call this one `overlay`

View File

@@ -224,7 +224,7 @@ it when needed.
If you intend to use the wrapped remote both directly for keeping
unencrypted content, as well as through a crypt remote for encrypted
content, it is recommended to point the crypt remote to a separate
directory within the wrapped remote. If you use a bucket-based storage
system (e.g. Swift, S3, Google Compute Storage, B2, Hubic) it is generally
advisable to wrap the crypt remote around a specific bucket (`s3:bucket`).
If wrapping around the entire root of the storage (`s3:`), and use the

View File

@@ -278,7 +278,7 @@ This will make `parameter` be `with"quote` and `parameter2` be
`with'quote`.

If you leave off the `=parameter` then rclone will substitute `=true`
which works very well with flags. For example, to use s3 configured in
the environment you could use:

    rclone lsd :s3,env_auth:
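The connection-string shape used above (remote name, comma-separated parameters, trailing colon, with bare parameters defaulting to `=true`) can be sketched with a toy parser. This is illustrative only - rclone is written in Go and its real parser also handles quoting:

```python
def parse_connection_string(spec: str):
    """Split 'name,key=value,flag:' into (name, params).

    Toy sketch of the syntax described above - rclone's real parser
    also handles quoted values, which this version ignores.
    """
    head, _, _ = spec.rpartition(":")  # drop the trailing colon
    name, *parts = head.split(",")
    params = {}
    for part in parts:
        key, eq, value = part.partition("=")
        # leaving off `=parameter` substitutes `=true`, as the docs say
        params[key] = value if eq else "true"
    return name, params

print(parse_connection_string(":s3,env_auth:"))  # (':s3', {'env_auth': 'true'})
```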
@@ -485,7 +485,7 @@ it will give an error.
This option controls the bandwidth limit. For example

    --bwlimit 10M

would mean limit the upload and download bandwidth to 10 MiB/s.
**NB** this is **bytes** per second not **bits** per second. To use a
single limit, specify the desired bandwidth in KiB/s, or use a
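To make the bytes-versus-bits point concrete, a quick calculation (illustrative Python, not rclone code):

```python
def bwlimit_to_bits_per_second(mib_per_second):
    """Convert a --bwlimit style MiB/s byte rate into bits/s.

    --bwlimit counts bytes per second, so 10 MiB/s is far more
    than 10 megabits/s on a network link.
    """
    bytes_per_second = mib_per_second * 1024 * 1024  # MiB -> bytes
    return bytes_per_second * 8  # bytes -> bits

print(bwlimit_to_bits_per_second(10))  # 83886080, i.e. ~84 Mbit/s
```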
@@ -664,12 +664,12 @@ they are incorrect as it would normally.
### --compare-dest=DIR ###

When using `sync`, `copy` or `move` DIR is checked in addition to the
destination for files. If a file identical to the source is found that
file is NOT copied from source. This is useful to copy just files that
have changed since the last backup.

You must use the same remote as the destination of the sync. The
compare directory must not overlap the destination directory.

See `--copy-dest` and `--backup-dir`.
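The selection logic reads roughly like this sketch, with files modelled as name-to-size dicts for brevity; rclone's real identity check uses size plus modification time and/or hash:

```python
def files_to_copy(source, dest, compare_dir):
    """--compare-dest sketch: copy a source file only when no
    identical file exists in either the destination or DIR."""
    result = {}
    for name, size in source.items():
        if dest.get(name) == size:
            continue  # already at the destination
        if compare_dir.get(name) == size:
            continue  # identical file found in DIR: NOT copied
        result[name] = size  # changed since the last backup
    return result

print(files_to_copy({"a": 1, "b": 2, "c": 3}, {"a": 1}, {"b": 2}))  # {'c': 3}
```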
@@ -772,9 +772,9 @@ connection to go through to a remote object storage system. It is
### --copy-dest=DIR ###

When using `sync`, `copy` or `move` DIR is checked in addition to the
destination for files. If a file identical to the source is found that
file is server-side copied from DIR to the destination. This is useful
for incremental backup.

The remote in use must support server-side copy and you must
@@ -951,7 +951,7 @@ default, and responds to key `u` for toggling human-readable format.
### --ignore-case-sync ###

Using this option will cause rclone to ignore the case of the files
when synchronizing so files will not be copied/synced when the
existing filenames are the same, even if the casing is different.
@@ -1097,7 +1097,7 @@ warnings and significant events.
### --use-json-log ###

This switches the log format to JSON for rclone. The fields of the JSON
log are `level`, `msg`, `source` and `time`.

### --low-level-retries NUMBER ###
@@ -1479,7 +1479,7 @@ Disable retries with `--retries 1`.
### --retries-sleep=TIME ###

This sets the interval between each retry specified by `--retries`.

The default is `0`. Use `0` to disable.
@@ -1516,9 +1516,9 @@ Note that on macOS you can send a SIGINFO (which is normally ctrl-T in
the terminal) to make the stats print immediately.

### --stats-file-name-length integer ###

By default, the `--stats` output will truncate file names and paths longer
than 40 characters. This is equivalent to providing
`--stats-file-name-length 40`. Use `--stats-file-name-length 0` to disable
any truncation of file names printed by stats.
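The effect can be sketched as below; note that the exact shortening style rclone uses may differ - keeping the tail of the name is just one reasonable choice:

```python
def stats_name(path, limit=40):
    """Truncate a file name for stats output, --stats-file-name-length
    style: 0 disables truncation entirely (a sketch, not rclone code)."""
    if limit == 0 or len(path) <= limit:
        return path
    return "..." + path[-(limit - 3):]  # keep the informative tail

long_path = "bucket/deeply/nested/directory/tree/with/a/really-long-file-name.bin"
print(len(stats_name(long_path)))  # 40
print(stats_name("short.txt"))  # short.txt
```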
### --stats-log-level string ###
@@ -1562,14 +1562,14 @@ The default is `bytes`.
### --suffix=SUFFIX ###

When using `sync`, `copy` or `move` any files which would have been
overwritten or deleted will have the suffix added to them. If there
is a file with the same path (after the suffix has been added), then
it will be overwritten.

The remote in use must support server-side move or copy and you must
use the same remote as the destination of the sync.

This is for use with files to add the suffix in the current directory
or with `--backup-dir`. See `--backup-dir` for more info.

For example
@@ -1633,7 +1633,7 @@ will depend on the backend. For HTTP based backends it is an HTTP
PUT/GET/POST/etc and its response. For FTP/SFTP it is a round trip
transaction over TCP.

For example, to limit rclone to 10 transactions per second use
`--tpslimit 10`, or to 1 transaction every 2 seconds use `--tpslimit
0.5`.
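The relationship between a `--tpslimit` value and the spacing of transactions is simply its reciprocal (illustrative sketch; rclone enforces the limit internally):

```python
def min_interval_seconds(tps_limit):
    """Seconds between transactions for a given --tpslimit value."""
    if tps_limit <= 0:
        raise ValueError("tps limit must be positive")
    return 1.0 / tps_limit

print(min_interval_seconds(10))   # 0.1 -> 10 transactions per second
print(min_interval_seconds(0.5))  # 2.0 -> 1 transaction every 2 seconds
```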
@@ -1749,7 +1749,7 @@ quickly using the least amount of memory.
However, some remotes have a way of listing all files beneath a
directory in one (or a small number) of transactions. These tend to
be the bucket-based remotes (e.g. S3, B2, GCS, Swift, Hubic).

If you use the `--fast-list` flag then rclone will use this method for
listing directories. This will have the following consequences for
@@ -1898,8 +1898,8 @@ This option defaults to `false`.
Configuration Encryption
------------------------

Your configuration file contains information for logging in to
your cloud services. This means that you should keep your
`rclone.conf` file in a secure location.

If you are in an environment where that isn't possible, you can
@@ -1947,8 +1947,8 @@ encryption from your configuration.
There is no way to recover the configuration if you lose your password.

rclone uses [nacl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox)
which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate
your configuration with secret-key cryptography.
The password is SHA-256 hashed, which produces the key for secretbox.
The hashed password is not stored.
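The key-derivation step described above can be shown with the standard library (hashing only; the secretbox encryption itself needs a NaCl library and is omitted, and rclone's actual implementation is in Go):

```python
import hashlib

def derive_secretbox_key(password):
    """SHA-256 the password to produce the 32-byte secretbox key,
    as the text describes. The hashed password itself is never
    written to disk."""
    return hashlib.sha256(password.encode("utf-8")).digest()

key = derive_secretbox_key("correct horse battery staple")
print(len(key))  # 32 - the key size secretbox requires
```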
@@ -2000,8 +2000,8 @@ script method of supplying the password enhances the security of
the config password considerably.

If you are running rclone inside a script, unless you are using the
`--password-command` method, you might want to disable
password prompts. To do that, pass the parameter
`--ask-password=false` to rclone. This will make rclone fail instead
of asking for a password if `RCLONE_CONFIG_PASS` doesn't contain
a valid password, and `--password-command` has not been supplied.
@@ -2039,9 +2039,9 @@ Write CPU profile to file. This can be analysed with `go tool pprof`.
The `--dump` flag takes a comma-separated list of flags to dump info
about.

Note that some headers including `Accept-Encoding` as shown may not
be correct in the request and the response may not show `Content-Encoding`
if the Go standard library's automatic gzip encoding was in effect. In this
case the body of the request will be gunzipped before showing it.
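What "gunzipped before showing it" amounts to can be reproduced with the standard library (a sketch of what a `--dump` style debugger has to do, not rclone's actual code):

```python
import gzip

def printable_body(raw, content_encoding=None):
    """Return a dumped HTTP body in readable form, decompressing it
    when gzip encoding was applied in transit."""
    if content_encoding == "gzip":
        return gzip.decompress(raw)
    return raw

compressed = gzip.compress(b"HTTP response body")
print(printable_body(compressed, "gzip"))  # b'HTTP response body'
```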
The available flags are: The available flags are:
@@ -2279,7 +2279,7 @@ this order and the first one with a value is used.
- Parameters in connection strings, e.g. `myRemote,skip_links:`
- Flag values as supplied on the command line, e.g. `--skip-links`
- Remote specific environment vars, e.g. `RCLONE_CONFIG_MYREMOTE_SKIP_LINKS` (see above).
- Backend-specific environment vars, e.g. `RCLONE_LOCAL_SKIP_LINKS`.
- Backend generic environment vars, e.g. `RCLONE_SKIP_LINKS`.
- Config file, e.g. `skip_links = true`.
- Default values, e.g. `false` - these can't be changed.
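The precedence list above amounts to "first source with a value wins". A minimal sketch of that resolution logic in Go (hypothetical names for illustration; this is not rclone's actual implementation):

```go
package main

import "fmt"

// resolveOption returns the first non-empty value from the candidate
// sources, ordered highest precedence first. The final element plays
// the role of the immutable default.
func resolveOption(sources ...string) string {
	for _, v := range sources {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	// Flag unset, remote-specific env var set, config file also set:
	// the env var wins because it comes earlier in the list.
	got := resolveOption(
		"",      // connection string parameter
		"",      // command line flag
		"true",  // RCLONE_CONFIG_MYREMOTE_SKIP_LINKS
		"",      // RCLONE_LOCAL_SKIP_LINKS
		"",      // RCLONE_SKIP_LINKS
		"false", // config file
		"false", // default
	)
	fmt.Println(got) // true
}
```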
@@ -6,7 +6,7 @@ type: page
# {{< icon "fa fa-heart heart" >}} Donations to the rclone project

Rclone is a free open-source project with thousands of contributions
from volunteers all round the world and I would like to thank all of
you for donating your time to the project.
@@ -190,7 +190,7 @@ issues with DNS resolution. See the [name resolution section in the go docs](htt

### The total size reported in the stats for a sync is wrong and keeps changing

It is likely you have more than 10,000 files that need to be
synced. By default, rclone only gets 10,000 files ahead in a sync so as
not to use up too much memory. You can change this default with the
[--max-backlog](/docs/#max-backlog-n) flag.
@@ -386,7 +386,7 @@ statement. For more flexibility use the `--filter-from` flag.

### `--filter` - Add a file-filtering rule

Specifies path/file names to an rclone command, based on a single
include or exclude rule, in `+` or `-` format.

This flag can be repeated. See above for the order filter flags are
processed in.
@@ -555,7 +555,7 @@ input to `--files-from-raw`.

### `--ignore-case` - make searches case insensitive

By default, rclone filter patterns are case sensitive. The `--ignore-case`
flag makes all of the filter patterns on the command line case
insensitive.
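The effect of `--ignore-case` can be sketched by lower-casing both the pattern and the path before matching. A minimal illustration using Go's `path.Match` (an assumption for illustration only; rclone's real filter engine is more elaborate):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// matchInsensitive lower-cases both pattern and name so that
// "*.jpg" matches "PHOTO.JPG", mimicking --ignore-case.
func matchInsensitive(pattern, name string) bool {
	ok, err := path.Match(strings.ToLower(pattern), strings.ToLower(name))
	return err == nil && ok
}

func main() {
	// Case-sensitive default: no match.
	def, _ := path.Match("*.jpg", "PHOTO.JPG")
	fmt.Println(def) // false
	// With --ignore-case semantics: match.
	fmt.Println(matchInsensitive("*.jpg", "PHOTO.JPG")) // true
}
```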
@@ -17,7 +17,7 @@ rclone rcd --rc-web-gui
```

This will produce logs like this and rclone needs to continue to run to serve the GUI:

```
2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip
2019/08/25 11:40:14 NOTICE: Downloading webgui binary. Please wait. [Size: 3813937, Path : /home/USER/.cache/rclone/webgui/v0.0.6.zip]
@@ -28,12 +28,12 @@ This will produce logs like this and rclone needs to continue to run to serve th

This assumes you are running rclone locally on your machine. It is
possible to separate rclone and the GUI - see below for details.

If you wish to check for updates then you can add `--rc-web-gui-update`
to the command line.

If you find your GUI broken, you may force it to update by adding `--rc-web-gui-force-update`.

By default, rclone will open your browser. Add `--rc-web-gui-no-open-browser`
to disable this feature.

## Using the GUI
@@ -55,7 +55,7 @@ On the left hand side you will see a series of view buttons you can click on:

When you run the `rclone rcd --rc-web-gui` this is what happens

- Rclone starts but only runs the remote control API ("rc").
- The API is bound to localhost with an auto-generated username and password.
- If the API bundle is missing then rclone will download it.
- rclone will start serving the files from the API bundle over the same port as the API
- rclone will open the browser with a `login_token` so it can log straight in.
@@ -48,12 +48,12 @@ Copy binary file

    sudo cp rclone /usr/bin/
    sudo chown root:root /usr/bin/rclone
    sudo chmod 755 /usr/bin/rclone

Install manpage

    sudo mkdir -p /usr/local/share/man/man1
    sudo cp rclone.1 /usr/local/share/man/man1/
    sudo mandb

Run `rclone config` to set up. See [rclone config docs](/docs/) for more details.
@@ -229,7 +229,7 @@ Instructions
1. `git clone https://github.com/stefangweichinger/ansible-rclone.git` into your local roles-directory
2. add the role to the hosts you want rclone installed to:

```
- hosts: rclone-hosts
  roles:
@@ -346,7 +346,7 @@ your rclone command, as an alternative to scheduled task configured to run at st

##### Mount command built-in service integration #####

For mount commands, Rclone has a built-in Windows service integration via the third-party
WinFsp library it uses. Registering as a regular Windows service is easy, as you just have to
execute the built-in PowerShell command `New-Service` (requires administrative privileges).
@@ -366,9 +366,9 @@ Windows standard methods for managing network drives. This is currently not
officially supported by Rclone, but with WinFsp version 2019.3 B2 / v1.5B2 or later
it should be possible through path rewriting as described [here](https://github.com/rclone/rclone/issues/3340).

##### Third-party service integration #####

To run any rclone command as a Windows service, the excellent third-party utility
[NSSM](http://nssm.cc), the "Non-Sucking Service Manager", can be used.
It includes some advanced features such as adjusting process priority, defining
process environment variables, redirecting anything written to stdout to a file, and
@@ -107,7 +107,7 @@ Choose a number from below, or type in an existing value
1 > Archive
2 > Links
3 > Sync
Mountpoints> 1
--------------------
[jotta]
@@ -200,7 +200,7 @@ as they can't be used in XML strings.

### Deleting files

By default, rclone will send all files to the trash when deleting files. They will be permanently
deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately
by using the [--jottacloud-hard-delete](#jottacloud-hard-delete) flag, or set the equivalent environment variable.
Emptying the trash is supported by the [cleanup](/commands/rclone_cleanup/) command.
@@ -8,7 +8,7 @@ description: "Rclone docs for Memory backend"
The memory backend is an in RAM backend. It does not persist its
data - use the local backend for that.

The memory backend behaves like a bucket-based remote (e.g. like
s3). Because it has no parameters you can just use it with the
`:memory:` remote name.
@@ -406,7 +406,7 @@ remote itself may assign the MIME type.

## Optional Features ##

All rclone remotes support a base command set. Other features depend
upon backend-specific capabilities.

| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | LinkSharing | About | EmptyDir |
| ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------:|:-----:|:--------:|
@@ -428,7 +428,7 @@ upon backend specific capabilities.
| Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
| Mail.ru Cloud | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
| Mega | Yes | No | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
| Memory | No | Yes | No | No | No | Yes | Yes | No | No | No |
| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | No | No | No |
| Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes |
@@ -529,4 +529,4 @@ See [rclone about command](https://rclone.org/commands/rclone_about/)

### EmptyDir ###

The remote supports empty directories. See [Limitations](/bugs/#limitations)
for details. Most Object/Bucket-based remotes do not support this.
@@ -55,7 +55,7 @@ This website may use social sharing buttons which help share web content directl

## Use of Cloud API User Data ##

Rclone is a command-line program to manage files on cloud storage. Its sole purpose is to access and manipulate user content in the [supported](/overview/) cloud storage systems from a local machine of the end user. For accessing the user content via the cloud provider API, Rclone uses authentication mechanisms, such as OAuth or HTTP Cookies, depending on the particular cloud provider offerings. Use of these authentication mechanisms and user data is governed by the privacy policies mentioned in the [Resources & Further Information](/privacy/#resources-further-information) section and followed by the privacy policy of Rclone.

* Rclone provides the end user with access to their files available in a storage system associated by the authentication credentials via the publicly exposed API of the storage system.
* Rclone allows storing the authentication credentials on the user machine in the local configuration file.
@@ -1632,7 +1632,7 @@ parameters or by supplying "Content-Type: application/json" and a JSON
blob in the body. There are examples of these below using `curl`.

The response will be a JSON blob in the body of the response. This is
formatted to be reasonably human-readable.

### Error returns
@@ -151,7 +151,7 @@ Choose a number from below, or type in your own value
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
@@ -239,16 +239,16 @@ env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
```

### Modified time
@@ -268,7 +268,7 @@ request as the metadata isn't returned in object listings.

#### Avoiding HEAD requests to read the modification time

By default, rclone will use the modification time of objects stored in
S3 for syncing. This is stored in object metadata which unfortunately
takes an extra HEAD request to read which can be expensive (in time
and money).
@@ -347,7 +347,7 @@ Note that `--fast-list` isn't required in the top-up sync.

#### Avoiding HEAD requests after PUT

By default, rclone will HEAD every object it uploads. It does this to
check the object got uploaded correctly.

You can disable this with the [--s3-no-head](#s3-no-head) option - see
@@ -513,7 +513,7 @@ Example policy:
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}
```
@@ -1940,14 +1940,14 @@ up looking like this:
type = s3
provider = AWS
env_auth = false
access_key_id =
secret_access_key =
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
```

Then use it as normal with the name of the public bucket, e.g.
@@ -1983,7 +1983,7 @@ upload_cutoff = 0

### Ceph

[Ceph](https://ceph.com/) is an open-source, unified, distributed
storage system designed for excellent performance, reliability and
scalability. It has an S3 compatible object storage interface.
@@ -2340,7 +2340,7 @@ location_constraint =
server_side_encryption =
```

So once set up, for example, to copy files into a bucket

```
rclone copy /path/to/files minio:bucket
@@ -281,7 +281,7 @@ If the rate parameter is not supplied then the bandwidth is queried
The format of the parameter is exactly the same as passed to --bwlimit
except only one bandwidth may be specified.

In either case "rate" is returned as a human-readable string, and
"bytesPerSecond" is returned as a number.
`,
})
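Returning a human-readable rate string alongside a numeric bytes-per-second value, as the rc response above describes, can be sketched like this (illustrative only; rclone's own SizeSuffix formatting differs in detail):

```go
package main

import "fmt"

// humanRate renders a bytes-per-second value with binary (KiB/MiB)
// suffixes, roughly mirroring a human-readable "rate" field kept
// next to a raw "bytesPerSecond" number.
func humanRate(bps float64) string {
	suffixes := []string{"B/s", "KiB/s", "MiB/s", "GiB/s", "TiB/s"}
	i := 0
	for bps >= 1024 && i < len(suffixes)-1 {
		bps /= 1024
		i++
	}
	return fmt.Sprintf("%.3g%s", bps, suffixes[i])
}

func main() {
	fmt.Println(humanRate(512)) // 512B/s
	fmt.Println(humanRate(10 * 1024 * 1024))
}
```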
@@ -154,7 +154,7 @@ func TestPin(t *testing.T) {
	cleanup, create := mockNewFs(t)
	defer cleanup()

	// Test pinning and unpinning non-existent
	f := mockfs.NewFs(context.Background(), "mock", "/alien")
	Pin(f)
	Unpin(f)
@@ -99,7 +99,7 @@ func ParseRangeOption(s string) (po *RangeOption, err error) {
	return &o, nil
}

// String formats the option into human-readable form
func (o *RangeOption) String() string {
	return fmt.Sprintf("RangeOption(%d,%d)", o.Start, o.End)
}

@@ -178,7 +178,7 @@ func (o *SeekOption) Header() (key string, value string) {
	return key, value
}

// String formats the option into human-readable form
func (o *SeekOption) String() string {
	return fmt.Sprintf("SeekOption(%d)", o.Offset)
}

@@ -199,7 +199,7 @@ func (o *HTTPOption) Header() (key string, value string) {
	return o.Key, o.Value
}

// String formats the option into human-readable form
func (o *HTTPOption) String() string {
	return fmt.Sprintf("HTTPOption(%q,%q)", o.Key, o.Value)
}

@@ -220,7 +220,7 @@ func (o *HashesOption) Header() (key string, value string) {
	return "", ""
}

// String formats the option into human-readable form
func (o *HashesOption) String() string {
	return fmt.Sprintf("HashesOption(%v)", o.Hashes)
}

@@ -239,7 +239,7 @@ func (o NullOption) Header() (key string, value string) {
	return "", ""
}

// String formats the option into human-readable form
func (o NullOption) String() string {
	return fmt.Sprintf("NullOption()")
}
@@ -131,7 +131,7 @@ func newListJSON(ctx context.Context, fsrc fs.Fs, remote string, opt *ListJSONOp
	features := fsrc.Features()
	lj.canGetTier = features.GetTier
	lj.format = formatForPrecision(fsrc.Precision())
	lj.isBucket = features.BucketBased && remote == "" && fsrc.Root() == "" // if bucket-based remote listing the root mark directories as buckets
	lj.showHash = opt.ShowHash
	lj.hashTypes = fsrc.Hashes().Array()
	if len(opt.HashTypes) != 0 {
@@ -943,7 +943,7 @@ func ListLong(ctx context.Context, f fs.Fs, w io.Writer) error {
	})
}

// hashSum returns the human-readable hash for ht passed in. This may
// be UNSUPPORTED or ERROR. If it isn't returning a valid hash it will
// return an error.
func hashSum(ctx context.Context, ht hash.Type, downloadFlag bool, o fs.Object) (string, error) {
@@ -119,7 +119,7 @@ func ParseDuration(age string) (time.Duration, error) {
	return parseDurationFromNow(age, time.Now)
}

// ReadableString parses d into a human-readable duration.
// Based on https://github.com/hako/durafmt
func (d Duration) ReadableString() string {
	switch d {
@@ -124,7 +124,7 @@ func (l ListType) Filter(in *fs.DirEntries) {
// If maxLevel is < 0 then it will recurse indefinitely, else it will
// only do maxLevel levels.
//
// If synthesizeDirs is set then for bucket-based remotes it will
// synthesize directories from the file structure. This uses extra
// memory so don't set this if you don't need directories, likewise do
// set this if you are interested in directories.

@@ -182,7 +182,7 @@ func listRwalk(ctx context.Context, f fs.Fs, path string, includeAll bool, maxLe
	return walkErr
}

// dirMap keeps track of directories made for bucket-based remotes.
// true => directory has been sent
// false => directory has been seen but not sent
type dirMap struct {
@@ -783,7 +783,7 @@ func TestListR(t *testing.T) {
	require.NoError(t, err)
	require.Equal(t, []string{"dir/b"}, got)

	// Now bucket-based
	objects = fs.DirEntries{
		mockobject.Object("a"),
		mockobject.Object("b"),
@@ -491,14 +491,14 @@ func Run(t *testing.T, opt *Opt) {
		assert.True(t, len(fsInfo.CommandHelp) > 0, "Command is declared, must return some help in CommandHelp")
	})

	// TestFsRmdirNotFound tests deleting a non-existent directory
	t.Run("FsRmdirNotFound", func(t *testing.T) {
		skipIfNotOk(t)
		if isBucketBasedButNotRoot(f) {
			t.Skip("Skipping test as non root bucket-based remote")
		}
		err := f.Rmdir(ctx, "")
		assert.Error(t, err, "Expecting error on Rmdir non-existent")
	})

	// Make the directory

@@ -1258,7 +1258,7 @@ func Run(t *testing.T, opt *Opt) {
	t.Run("FsRmdirFull", func(t *testing.T) {
		skipIfNotOk(t)
		if isBucketBasedButNotRoot(f) {
			t.Skip("Skipping test as non root bucket-based remote")
		}
		err := f.Rmdir(ctx, "")
		require.Error(t, err, "Expecting error on RMdir on non empty remote")

@@ -1959,7 +1959,7 @@ func Run(t *testing.T, opt *Opt) {
	purged = true
	fstest.CheckListing(t, f, []fstest.Item{})

	// Check purging again if not bucket-based
	if !isBucketBasedButNotRoot(f) {
		err = operations.Purge(ctx, f, "")
		assert.Error(t, err, "Expecting error on second purge")
@@ -25,7 +25,7 @@ type Test struct {
// Backend describes a backend test
//
// FIXME make bucket-based remotes set sub-dir automatically???
type Backend struct {
	Backend string // name of the backend directory
	Remote  string // name of the test remote
@@ -123,7 +123,7 @@ func (r *Report) RecordResult(t *Run) {
	}
}

// Title returns a human-readable summary title for the Report
func (r *Report) Title() string {
	if r.AllPassed() {
		return fmt.Sprintf("PASS: All tests finished OK in %v", r.Duration)
@@ -1,4 +1,4 @@
// Package bucket contains utilities for managing bucket-based backends
package bucket

import (
@@ -158,7 +158,7 @@ func TestCachePin(t *testing.T) {
	_, err := c.Get("/", create)
	require.NoError(t, err)

	// Pin a non-existent item to show nothing happens
	c.Pin("notfound")
	c.mu.Lock()

@@ -312,7 +312,7 @@ func TestCacheRename(t *testing.T) {
	assert.Equal(t, 2, c.Entries())

	// rename to non-existent
	value, found := c.Rename("existing1", "EXISTING1")
	assert.Equal(t, true, found)
	assert.Equal(t, existing1, value)

@@ -326,7 +326,7 @@ func TestCacheRename(t *testing.T) {
	assert.Equal(t, 1, c.Entries())

	// rename non-existent
	value, found = c.Rename("notfound", "NOTFOUND")
	assert.Equal(t, false, found)
	assert.Nil(t, value)
@@ -15,7 +15,7 @@ import (
// and b will be set.
//
// This is useful for copying between almost identical structures that
// are frequently present in auto-generated code for cloud storage
// interfaces.
func SetFrom(a, b interface{}) {
	ta := reflect.TypeOf(a).Elem()
"renamed empty directory,0,true", "renamed empty directory,0,true",
}) })
// ...we don't check the underlying f.Fremote because on // ...we don't check the underlying f.Fremote because on
// bucket based remotes the directory won't be there // bucket-based remotes the directory won't be there
// read only check // read only check
vfs.Opt.ReadOnly = true vfs.Opt.ReadOnly = true
@@ -603,7 +603,7 @@ func TestCacheRename(t *testing.T) {
	assertPathNotExist(t, osPathMeta)
	assert.False(t, c.Exists("sub/newPotato"))

	// non-existent file - is ignored
	assert.NoError(t, c.Rename("nonexist", "nonexist2", nil))
}